00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 148 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3650 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.068 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.068 The recommended git tool is: git 00:00:00.069 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.103 Fetching changes from the remote Git repository 00:00:00.107 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.158 Using shallow fetch with depth 1 00:00:00.158 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.158 > git --version # timeout=10 00:00:00.207 > git --version # 'git version 2.39.2' 00:00:00.207 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.259 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.259 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.126 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.137 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.149 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.149 > git config core.sparsecheckout # timeout=10 00:00:05.158 > git read-tree -mu HEAD # timeout=10 00:00:05.175 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.194 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.194 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.298 [Pipeline] Start of Pipeline 00:00:05.311 [Pipeline] library 00:00:05.312 Loading library shm_lib@master 00:00:05.312 Library shm_lib@master is cached. Copying from home. 00:00:05.328 [Pipeline] node 00:00:05.345 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.346 [Pipeline] { 00:00:05.357 [Pipeline] catchError 00:00:05.359 [Pipeline] { 00:00:05.373 [Pipeline] wrap 00:00:05.382 [Pipeline] { 00:00:05.390 [Pipeline] stage 00:00:05.392 [Pipeline] { (Prologue) 00:00:05.410 [Pipeline] echo 00:00:05.412 Node: VM-host-SM38 00:00:05.417 [Pipeline] cleanWs 00:00:05.426 [WS-CLEANUP] Deleting project workspace... 00:00:05.426 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.433 [WS-CLEANUP] done 00:00:05.606 [Pipeline] setCustomBuildProperty 00:00:05.677 [Pipeline] httpRequest 00:00:05.979 [Pipeline] echo 00:00:05.981 Sorcerer 10.211.164.20 is alive 00:00:05.990 [Pipeline] retry 00:00:05.991 [Pipeline] { 00:00:06.005 [Pipeline] httpRequest 00:00:06.010 HttpMethod: GET 00:00:06.010 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.011 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.022 Response Code: HTTP/1.1 200 OK 00:00:06.022 Success: Status code 200 is in the accepted range: 200,404 00:00:06.023 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.826 [Pipeline] } 00:00:11.844 [Pipeline] // retry 00:00:11.852 [Pipeline] sh 00:00:12.140 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.157 [Pipeline] httpRequest 00:00:12.991 [Pipeline] echo 00:00:12.993 Sorcerer 10.211.164.20 is alive 00:00:13.005 [Pipeline] retry 00:00:13.007 [Pipeline] { 00:00:13.024 [Pipeline] httpRequest 00:00:13.031 HttpMethod: GET 00:00:13.032 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:13.032 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:13.054 Response Code: HTTP/1.1 200 OK 00:00:13.055 Success: Status code 200 is in the accepted range: 200,404 00:00:13.055 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:14.573 [Pipeline] } 00:01:14.587 [Pipeline] // retry 00:01:14.593 [Pipeline] sh 00:01:14.879 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:18.203 [Pipeline] sh 00:01:18.490 + git -C spdk log --oneline -n5 00:01:18.490 b18e1bd62 version: v24.09.1-pre 00:01:18.490 19524ad45 version: v24.09 00:01:18.490 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:18.490 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:18.490 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:18.512 [Pipeline] withCredentials 00:01:18.523 > git --version # timeout=10 00:01:18.535 > git --version # 'git version 2.39.2' 00:01:18.557 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:18.559 [Pipeline] { 00:01:18.569 [Pipeline] retry 00:01:18.571 [Pipeline] { 00:01:18.586 [Pipeline] sh 00:01:18.870 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:18.884 [Pipeline] } 00:01:18.902 [Pipeline] // retry 00:01:18.908 [Pipeline] } 00:01:18.925 [Pipeline] // withCredentials 00:01:18.940 [Pipeline] httpRequest 00:01:19.314 [Pipeline] echo 00:01:19.316 Sorcerer 10.211.164.20 is alive 00:01:19.324 [Pipeline] retry 00:01:19.326 [Pipeline] { 00:01:19.338 [Pipeline] httpRequest 00:01:19.343 HttpMethod: GET 00:01:19.344 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:19.345 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:19.353 Response Code: HTTP/1.1 200 OK 00:01:19.354 Success: Status code 200 is in the accepted range: 200,404 00:01:19.354 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:52.998 [Pipeline] } 00:01:53.015 [Pipeline] // retry 00:01:53.023 [Pipeline] sh 00:01:53.311 + tar --no-same-owner -xf 
dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:55.246 [Pipeline] sh 00:01:55.532 + git -C dpdk log --oneline -n5 00:01:55.532 caf0f5d395 version: 22.11.4 00:01:55.532 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:55.532 dc9c799c7d vhost: fix missing spinlock unlock 00:01:55.532 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:55.532 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:55.552 [Pipeline] writeFile 00:01:55.569 [Pipeline] sh 00:01:55.858 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:55.871 [Pipeline] sh 00:01:56.157 + cat autorun-spdk.conf 00:01:56.157 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.157 SPDK_TEST_NVME=1 00:01:56.157 SPDK_TEST_FTL=1 00:01:56.157 SPDK_TEST_ISAL=1 00:01:56.157 SPDK_RUN_ASAN=1 00:01:56.157 SPDK_RUN_UBSAN=1 00:01:56.157 SPDK_TEST_XNVME=1 00:01:56.157 SPDK_TEST_NVME_FDP=1 00:01:56.157 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:56.157 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:56.157 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:56.166 RUN_NIGHTLY=1 00:01:56.169 [Pipeline] } 00:01:56.183 [Pipeline] // stage 00:01:56.198 [Pipeline] stage 00:01:56.200 [Pipeline] { (Run VM) 00:01:56.212 [Pipeline] sh 00:01:56.496 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:56.496 + echo 'Start stage prepare_nvme.sh' 00:01:56.496 Start stage prepare_nvme.sh 00:01:56.496 + [[ -n 9 ]] 00:01:56.496 + disk_prefix=ex9 00:01:56.496 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:56.496 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:56.496 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:56.496 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.496 ++ SPDK_TEST_NVME=1 00:01:56.496 ++ SPDK_TEST_FTL=1 00:01:56.496 ++ SPDK_TEST_ISAL=1 00:01:56.496 ++ SPDK_RUN_ASAN=1 00:01:56.496 ++ SPDK_RUN_UBSAN=1 00:01:56.496 ++ SPDK_TEST_XNVME=1 00:01:56.496 ++ SPDK_TEST_NVME_FDP=1 00:01:56.496 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:56.496 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:56.496 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:56.496 ++ RUN_NIGHTLY=1 00:01:56.496 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:56.496 + nvme_files=() 00:01:56.496 + declare -A nvme_files 00:01:56.496 + backend_dir=/var/lib/libvirt/images/backends 00:01:56.497 + nvme_files['nvme.img']=5G 00:01:56.497 + nvme_files['nvme-cmb.img']=5G 00:01:56.497 + nvme_files['nvme-multi0.img']=4G 00:01:56.497 + nvme_files['nvme-multi1.img']=4G 00:01:56.497 + nvme_files['nvme-multi2.img']=4G 00:01:56.497 + nvme_files['nvme-openstack.img']=8G 00:01:56.497 + nvme_files['nvme-zns.img']=5G 00:01:56.497 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:56.497 + (( SPDK_TEST_FTL == 1 )) 00:01:56.497 + nvme_files["nvme-ftl.img"]=6G 00:01:56.497 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:56.497 + nvme_files["nvme-fdp.img"]=1G 00:01:56.497 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:56.497 + for nvme in "${!nvme_files[@]}" 00:01:56.497 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:01:56.758 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:56.758 + for nvme in "${!nvme_files[@]}" 00:01:56.758 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-ftl.img -s 6G 00:01:57.702 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:57.703 + for nvme in "${!nvme_files[@]}" 00:01:57.703 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:01:57.703 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:57.703 + for nvme in "${!nvme_files[@]}" 00:01:57.703 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:01:57.703 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:57.703 + for nvme in "${!nvme_files[@]}" 00:01:57.703 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:01:57.703 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:57.703 + for nvme in "${!nvme_files[@]}" 00:01:57.703 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:01:58.278 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.278 + for nvme in "${!nvme_files[@]}" 00:01:58.278 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:01:58.540 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.540 + for nvme in "${!nvme_files[@]}" 00:01:58.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-fdp.img -s 1G 00:01:58.802 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:58.802 + for nvme in "${!nvme_files[@]}" 00:01:58.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:01:59.747 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:59.747 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:01:59.747 + echo 'End stage prepare_nvme.sh' 00:01:59.747 End stage prepare_nvme.sh 00:01:59.761 [Pipeline] sh 00:02:00.047 + DISTRO=fedora39 00:02:00.047 + CPUS=10 00:02:00.047 + RAM=12288 00:02:00.047 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:00.047 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex9-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:00.047 00:02:00.047 
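For reference, the image-creation loop traced in prepare_nvme.sh above follows a simple pattern: an associative array maps backing-file names to sizes, test flags (SPDK_TEST_FTL, SPDK_TEST_NVME_FDP) append extra entries, and the loop allocates each file. A minimal sketch of that pattern, assuming qemu-img as the underlying allocator (the "Formatting ... preallocation=falloc" lines above are characteristic qemu-img output; the real create_nvme_img.sh may wrap it differently):

#!/usr/bin/env bash
# Sketch of the image-creation pattern traced above (illustrative only).
declare -A nvme_files=([nvme.img]=5G [nvme-multi0.img]=4G [nvme-multi1.img]=4G)
(( SPDK_TEST_FTL == 1 ))      && nvme_files[nvme-ftl.img]=6G
(( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G
backend_dir=/var/lib/libvirt/images/backends
for nvme in "${!nvme_files[@]}"; do
    # falloc preallocation matches the "Formatting ..." lines in the log
    qemu-img create -f raw -o preallocation=falloc \
        "$backend_dir/ex9-$nvme" "${nvme_files[$nvme]}"
done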
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:00.047 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:00.047 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:00.047 HELP=0 00:02:00.047 DRY_RUN=0 00:02:00.048 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,/var/lib/libvirt/images/backends/ex9-nvme-fdp.img, 00:02:00.048 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:00.048 NVME_AUTO_CREATE=0 00:02:00.048 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,, 00:02:00.048 NVME_CMB=,,,, 00:02:00.048 NVME_PMR=,,,, 00:02:00.048 NVME_ZNS=,,,, 00:02:00.048 NVME_MS=true,,,, 00:02:00.048 NVME_FDP=,,,on, 00:02:00.048 SPDK_VAGRANT_DISTRO=fedora39 00:02:00.048 SPDK_VAGRANT_VMCPU=10 00:02:00.048 SPDK_VAGRANT_VMRAM=12288 00:02:00.048 SPDK_VAGRANT_PROVIDER=libvirt 00:02:00.048 SPDK_VAGRANT_HTTP_PROXY= 00:02:00.048 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:00.048 SPDK_OPENSTACK_NETWORK=0 00:02:00.048 VAGRANT_PACKAGE_BOX=0 00:02:00.048 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:00.048 FORCE_DISTRO=true 00:02:00.048 VAGRANT_BOX_VERSION= 00:02:00.048 EXTRA_VAGRANTFILES= 00:02:00.048 NIC_MODEL=e1000 00:02:00.048 00:02:00.048 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:00.048 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:02.632 Bringing machine 'default' up with 'libvirt' provider... 00:02:02.918 ==> default: Creating image (snapshot of base box volume). 00:02:02.918 ==> default: Creating domain with the following settings... 
00:02:02.918 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732146653_444b054efe5a2dd7c6a4 00:02:02.918 ==> default: -- Domain type: kvm 00:02:02.918 ==> default: -- Cpus: 10 00:02:02.918 ==> default: -- Feature: acpi 00:02:02.918 ==> default: -- Feature: apic 00:02:02.918 ==> default: -- Feature: pae 00:02:02.918 ==> default: -- Memory: 12288M 00:02:02.918 ==> default: -- Memory Backing: hugepages: 00:02:02.918 ==> default: -- Management MAC: 00:02:02.918 ==> default: -- Loader: 00:02:02.918 ==> default: -- Nvram: 00:02:02.918 ==> default: -- Base box: spdk/fedora39 00:02:02.918 ==> default: -- Storage pool: default 00:02:02.918 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732146653_444b054efe5a2dd7c6a4.img (20G) 00:02:02.918 ==> default: -- Volume Cache: default 00:02:02.918 ==> default: -- Kernel: 00:02:02.918 ==> default: -- Initrd: 00:02:02.918 ==> default: -- Graphics Type: vnc 00:02:02.918 ==> default: -- Graphics Port: -1 00:02:02.918 ==> default: -- Graphics IP: 127.0.0.1 00:02:02.918 ==> default: -- Graphics Password: Not defined 00:02:02.918 ==> default: -- Video Type: cirrus 00:02:02.918 ==> default: -- Video VRAM: 9216 00:02:02.918 ==> default: -- Sound Type: 00:02:02.918 ==> default: -- Keymap: en-us 00:02:02.918 ==> default: -- TPM Path: 00:02:02.918 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:02.918 ==> default: -- Command line args: 00:02:02.918 ==> default: -> value=-device, 00:02:02.918 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:02.918 ==> default: -> value=-drive, 00:02:02.918 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:02.918 ==> default: -> value=-device, 00:02:02.918 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:02.918 ==> default: -> value=-device, 00:02:02.918 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:02.918 ==> default: -> value=-drive, 00:02:02.918 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0, 00:02:02.918 ==> default: -> value=-device, 00:02:02.918 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.918 ==> default: -> value=-device, 00:02:02.918 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:02.918 ==> default: -> value=-drive, 00:02:02.918 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:02.918 ==> default: -> value=-device, 00:02:02.918 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.918 ==> default: -> value=-drive, 00:02:02.918 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:02.918 ==> default: -> value=-device, 00:02:02.918 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.918 ==> default: -> value=-drive, 00:02:02.918 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:02.918 ==> default: -> value=-device, 00:02:02.918 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.918 ==> default: -> value=-device, 00:02:02.918 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:02.918 ==> default: -> value=-device, 00:02:02.918 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:02.918 ==> default: -> value=-drive, 00:02:02.918 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:02.918 ==> default: -> value=-device, 00:02:02.918 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:03.179 ==> default: Creating shared folders metadata... 00:02:03.179 ==> default: Starting domain. 00:02:05.083 ==> default: Waiting for domain to get an IP address... 00:02:23.198 ==> default: Waiting for SSH to become available... 00:02:23.198 ==> default: Configuring and enabling network interfaces... 00:02:25.744 default: SSH address: 192.168.121.46:22 00:02:25.744 default: SSH username: vagrant 00:02:25.744 default: SSH auth method: private key 00:02:27.659 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:35.808 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:42.430 ==> default: Mounting SSHFS shared folder... 00:02:43.411 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:43.411 ==> default: Checking Mount.. 00:02:44.801 ==> default: Folder Successfully Mounted! 00:02:44.801 00:02:44.801 SUCCESS! 00:02:44.801 00:02:44.801 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:44.801 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:44.801 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:44.801 00:02:44.812 [Pipeline] } 00:02:44.829 [Pipeline] // stage 00:02:44.838 [Pipeline] dir 00:02:44.839 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:44.841 [Pipeline] { 00:02:44.854 [Pipeline] catchError 00:02:44.855 [Pipeline] { 00:02:44.868 [Pipeline] sh 00:02:45.154 + vagrant ssh-config --host vagrant 00:02:45.154 + sed -ne '/^Host/,$p' 00:02:45.154 + tee ssh_conf 00:02:48.458 Host vagrant 00:02:48.458 HostName 192.168.121.46 00:02:48.458 User vagrant 00:02:48.458 Port 22 00:02:48.458 UserKnownHostsFile /dev/null 00:02:48.458 StrictHostKeyChecking no 00:02:48.458 PasswordAuthentication no 00:02:48.458 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:48.458 IdentitiesOnly yes 00:02:48.458 LogLevel FATAL 00:02:48.458 ForwardAgent yes 00:02:48.458 ForwardX11 yes 00:02:48.458 00:02:48.474 [Pipeline] withEnv 00:02:48.476 [Pipeline] { 00:02:48.490 [Pipeline] sh 00:02:48.774 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:48.774 source /etc/os-release 00:02:48.774 [[ -e /image.version ]] && img=$(< /image.version) 00:02:48.774 # Minimal, systemd-like check. 
00:02:48.774 if [[ -e /.dockerenv ]]; then 00:02:48.774 # Clear garbage from the node'\''s name: 00:02:48.774 # agt-er_autotest_547-896 -> autotest_547-896 00:02:48.774 # $HOSTNAME is the actual container id 00:02:48.774 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:48.774 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:48.774 # We can assume this is a mount from a host where container is running, 00:02:48.774 # so fetch its hostname to easily identify the target swarm worker. 00:02:48.774 container="$(< /etc/hostname) ($agent)" 00:02:48.774 else 00:02:48.774 # Fallback 00:02:48.774 container=$agent 00:02:48.774 fi 00:02:48.774 fi 00:02:48.774 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:48.774 ' 00:02:49.060 [Pipeline] } 00:02:49.071 [Pipeline] // withEnv 00:02:49.077 [Pipeline] setCustomBuildProperty 00:02:49.087 [Pipeline] stage 00:02:49.088 [Pipeline] { (Tests) 00:02:49.099 [Pipeline] sh 00:02:49.376 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:49.655 [Pipeline] sh 00:02:49.941 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:50.221 [Pipeline] timeout 00:02:50.221 Timeout set to expire in 50 min 00:02:50.223 [Pipeline] { 00:02:50.240 [Pipeline] sh 00:02:50.529 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:51.103 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:51.118 [Pipeline] sh 00:02:51.406 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:51.683 [Pipeline] sh 00:02:52.030 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:52.072 [Pipeline] sh 00:02:52.356 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:52.618 ++ readlink -f spdk_repo 00:02:52.618 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:52.618 + [[ -n /home/vagrant/spdk_repo ]] 00:02:52.618 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:52.618 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:52.618 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:52.618 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:52.618 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:52.618 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:52.618 + cd /home/vagrant/spdk_repo 00:02:52.618 + source /etc/os-release 00:02:52.618 ++ NAME='Fedora Linux' 00:02:52.618 ++ VERSION='39 (Cloud Edition)' 00:02:52.618 ++ ID=fedora 00:02:52.618 ++ VERSION_ID=39 00:02:52.618 ++ VERSION_CODENAME= 00:02:52.618 ++ PLATFORM_ID=platform:f39 00:02:52.618 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:52.618 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:52.618 ++ LOGO=fedora-logo-icon 00:02:52.618 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:52.618 ++ HOME_URL=https://fedoraproject.org/ 00:02:52.618 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:52.618 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:52.618 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:52.618 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:52.618 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:52.618 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:52.618 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:52.618 ++ SUPPORT_END=2024-11-12 00:02:52.618 ++ VARIANT='Cloud Edition' 00:02:52.618 ++ VARIANT_ID=cloud 00:02:52.618 + uname -a 00:02:52.618 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:52.618 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:52.879 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:53.141 Hugepages 00:02:53.141 node hugesize free / total 00:02:53.141 node0 1048576kB 0 / 0 00:02:53.141 node0 2048kB 0 / 0 00:02:53.141 00:02:53.141 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:53.141 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:53.141 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:53.141 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:53.404 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:53.404 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:53.404 + rm -f /tmp/spdk-ld-path 00:02:53.404 + source autorun-spdk.conf 00:02:53.404 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:53.404 ++ SPDK_TEST_NVME=1 00:02:53.404 ++ SPDK_TEST_FTL=1 00:02:53.404 ++ SPDK_TEST_ISAL=1 00:02:53.404 ++ SPDK_RUN_ASAN=1 00:02:53.404 ++ SPDK_RUN_UBSAN=1 00:02:53.404 ++ SPDK_TEST_XNVME=1 00:02:53.404 ++ SPDK_TEST_NVME_FDP=1 00:02:53.404 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:53.404 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:53.404 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:53.404 ++ RUN_NIGHTLY=1 00:02:53.404 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:53.404 + [[ -n '' ]] 00:02:53.404 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:53.404 + for M in /var/spdk/build-*-manifest.txt 00:02:53.404 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:53.404 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.404 + for M in /var/spdk/build-*-manifest.txt 00:02:53.404 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:53.404 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.404 + for M in /var/spdk/build-*-manifest.txt 00:02:53.404 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:53.404 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.404 ++ uname 00:02:53.404 + [[ Linux == 
\L\i\n\u\x ]] 00:02:53.404 + sudo dmesg -T 00:02:53.404 + sudo dmesg --clear 00:02:53.404 + dmesg_pid=5763 00:02:53.404 + [[ Fedora Linux == FreeBSD ]] 00:02:53.404 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:53.404 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:53.404 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:53.404 + [[ -x /usr/src/fio-static/fio ]] 00:02:53.404 + sudo dmesg -Tw 00:02:53.404 + export FIO_BIN=/usr/src/fio-static/fio 00:02:53.404 + FIO_BIN=/usr/src/fio-static/fio 00:02:53.404 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:53.404 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:53.404 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:53.404 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:53.404 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:53.404 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:53.404 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:53.404 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:53.404 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:53.404 Test configuration: 00:02:53.404 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:53.404 SPDK_TEST_NVME=1 00:02:53.404 SPDK_TEST_FTL=1 00:02:53.404 SPDK_TEST_ISAL=1 00:02:53.404 SPDK_RUN_ASAN=1 00:02:53.404 SPDK_RUN_UBSAN=1 00:02:53.404 SPDK_TEST_XNVME=1 00:02:53.404 SPDK_TEST_NVME_FDP=1 00:02:53.404 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:53.404 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:53.404 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:53.404 RUN_NIGHTLY=1 23:51:43 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:53.404 23:51:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:53.404 23:51:43 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:53.404 23:51:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:53.404 23:51:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:53.404 23:51:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:53.404 23:51:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.404 23:51:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.404 23:51:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.404 23:51:43 -- paths/export.sh@5 -- $ export PATH 00:02:53.404 23:51:43 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.404 23:51:43 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:53.404 23:51:43 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:53.666 23:51:43 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732146703.XXXXXX 00:02:53.666 23:51:43 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732146703.eYfbZD 00:02:53.666 23:51:43 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:53.666 23:51:43 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:53.666 23:51:43 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:53.666 23:51:43 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:53.666 23:51:43 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:53.666 23:51:43 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:53.666 23:51:43 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:53.666 23:51:43 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:53.666 23:51:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.666 23:51:43 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:53.666 23:51:43 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:53.666 23:51:43 -- pm/common@17 -- $ local monitor 00:02:53.666 23:51:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.666 23:51:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.666 23:51:43 -- pm/common@25 -- $ sleep 1 00:02:53.666 23:51:43 -- pm/common@21 -- $ date +%s 00:02:53.666 23:51:43 -- pm/common@21 -- $ date +%s 00:02:53.666 23:51:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732146703 00:02:53.666 23:51:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732146703 00:02:53.666 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732146703_collect-cpu-load.pm.log 00:02:53.666 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732146703_collect-vmstat.pm.log 00:02:54.610 23:51:44 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:54.610 23:51:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:54.610 23:51:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:54.610 23:51:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:54.610 23:51:44 -- spdk/autobuild.sh@16 -- $ date -u 00:02:54.610 
Wed Nov 20 11:51:44 PM UTC 2024 00:02:54.610 23:51:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:54.610 v24.09-1-gb18e1bd62 00:02:54.610 23:51:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:54.610 23:51:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:54.610 23:51:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:54.610 23:51:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:54.610 23:51:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.610 ************************************ 00:02:54.610 START TEST asan 00:02:54.610 ************************************ 00:02:54.610 using asan 00:02:54.610 23:51:44 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:54.610 00:02:54.610 real 0m0.000s 00:02:54.610 user 0m0.000s 00:02:54.610 sys 0m0.000s 00:02:54.610 ************************************ 00:02:54.610 END TEST asan 00:02:54.610 ************************************ 00:02:54.610 23:51:44 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:54.610 23:51:44 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:54.610 23:51:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:54.610 23:51:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:54.610 23:51:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:54.611 23:51:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:54.611 23:51:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.611 ************************************ 00:02:54.611 START TEST ubsan 00:02:54.611 ************************************ 00:02:54.611 using ubsan 00:02:54.611 23:51:44 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:54.611 00:02:54.611 real 0m0.000s 00:02:54.611 user 0m0.000s 00:02:54.611 sys 0m0.000s 00:02:54.611 23:51:44 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:54.611 ************************************ 00:02:54.611 END TEST ubsan 00:02:54.611 ************************************ 00:02:54.611 23:51:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:54.611 23:51:45 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:54.611 23:51:45 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:54.611 23:51:45 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:54.611 23:51:45 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:54.611 23:51:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:54.611 23:51:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.611 ************************************ 00:02:54.611 START TEST build_native_dpdk 00:02:54.611 ************************************ 00:02:54.611 23:51:45 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:54.611 23:51:45 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:54.611 23:51:45 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:54.611 23:51:45 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:54.611 23:51:45 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:54.611 23:51:45 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:54.611 23:51:45 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:54.611 23:51:45 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:54.611 23:51:45 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:54.611 23:51:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:54.611 23:51:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:54.611 23:51:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:54.611 23:51:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:54.873 caf0f5d395 version: 22.11.4 00:02:54.873 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:54.873 dc9c799c7d vhost: fix missing spinlock unlock 00:02:54.873 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:54.873 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:54.873 23:51:45 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:54.873 23:51:45 build_native_dpdk 
-- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:54.873 23:51:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:54.873 23:51:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:54.873 23:51:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:54.873 23:51:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:54.873 23:51:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:54.873 23:51:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:54.873 23:51:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:54.873 23:51:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:54.873 23:51:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:54.873 23:51:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:54.874 23:51:45 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:54.874 patching file config/rte_config.h 00:02:54.874 Hunk #1 succeeded at 60 (offset 1 line). 
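The lt gate traced above is cmp_versions from scripts/common.sh; condensed from the xtrace, it is a plain field-by-field numeric walk over the version strings. A sketch reconstructed from the trace (not the verbatim function; it assumes purely numeric fields):

# Reconstructed from the cmp_versions trace above (illustrative sketch).
lt() {  # exit 0 iff $1 < $2, comparing dot/dash/colon-separated numeric fields
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # 10# forces base-10; missing fields compare as 0
        (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
        (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
    done
    return 1  # equal versions are not less-than
}
lt 22.11.4 21.11.0 || echo "not less-than (matches the trace: return 1)"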
00:02:54.874 23:51:45 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:54.874 23:51:45 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:54.874 patching file lib/pcapng/rte_pcapng.c 00:02:54.874 Hunk #1 succeeded at 110 (offset -18 lines). 
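Both hunks above apply with line offsets, which patch tolerates by default. The surrounding autobuild logic simply brackets each patch in a version gate, schematically (reusing the lt sketch above; the patch file name here is hypothetical, the real patches ship in the SPDK tree):

# Schematic of the version-gated patching traced above (file name hypothetical).
dpdk_ver=22.11.4
if lt "$dpdk_ver" 24.07.0; then    # true for 22.11.4, per the trace
    patch -p1 < pcapng-fix.patch   # -> "Hunk #1 succeeded at 110 (offset -18 lines)"
fi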
00:02:54.874 23:51:45 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:54.874 23:51:45 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:54.874 23:51:45 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:54.874 23:51:45 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:54.874 23:51:45 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:54.874 23:51:45 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:54.874 23:51:45 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:59.141 The Meson build system 00:02:59.141 Version: 1.5.0 00:02:59.141 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:59.141 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:59.141 Build type: native build 00:02:59.141 Program cat found: YES 
(/usr/bin/cat) 00:02:59.141 Project name: DPDK 00:02:59.141 Project version: 22.11.4 00:02:59.141 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:59.141 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:59.141 Host machine cpu family: x86_64 00:02:59.141 Host machine cpu: x86_64 00:02:59.141 Message: ## Building in Developer Mode ## 00:02:59.141 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:59.141 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:59.141 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:59.141 Program objdump found: YES (/usr/bin/objdump) 00:02:59.142 Program python3 found: YES (/usr/bin/python3) 00:02:59.142 Program cat found: YES (/usr/bin/cat) 00:02:59.142 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:59.142 Checking for size of "void *" : 8 00:02:59.142 Checking for size of "void *" : 8 (cached) 00:02:59.142 Library m found: YES 00:02:59.142 Library numa found: YES 00:02:59.142 Has header "numaif.h" : YES 00:02:59.142 Library fdt found: NO 00:02:59.142 Library execinfo found: NO 00:02:59.142 Has header "execinfo.h" : YES 00:02:59.142 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:59.142 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:59.142 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:59.142 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:59.142 Run-time dependency openssl found: YES 3.1.1 00:02:59.142 Run-time dependency libpcap found: YES 1.10.4 00:02:59.142 Has header "pcap.h" with dependency libpcap: YES 00:02:59.142 Compiler for C supports arguments -Wcast-qual: YES 00:02:59.142 Compiler for C supports arguments -Wdeprecated: YES 00:02:59.142 Compiler for C supports arguments -Wformat: YES 00:02:59.142 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:59.142 Compiler for C supports arguments -Wformat-security: NO 00:02:59.142 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:59.142 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:59.142 Compiler for C supports arguments -Wnested-externs: YES 00:02:59.142 Compiler for C supports arguments -Wold-style-definition: YES 00:02:59.142 Compiler for C supports arguments -Wpointer-arith: YES 00:02:59.142 Compiler for C supports arguments -Wsign-compare: YES 00:02:59.142 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:59.142 Compiler for C supports arguments -Wundef: YES 00:02:59.142 Compiler for C supports arguments -Wwrite-strings: YES 00:02:59.142 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:59.142 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:59.142 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:59.142 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:59.142 Compiler for C supports arguments -mavx512f: YES 00:02:59.142 Checking if "AVX512 checking" compiles: YES 00:02:59.142 Fetching value of define "__SSE4_2__" : 1 00:02:59.142 Fetching value of define "__AES__" : 1 00:02:59.142 Fetching value of define "__AVX__" : 1 00:02:59.142 Fetching value of define "__AVX2__" : 1 00:02:59.142 Fetching value of define "__AVX512BW__" : 1 00:02:59.142 Fetching value of define "__AVX512CD__" : 1 00:02:59.142 Fetching value of define "__AVX512DQ__" : 1 
00:02:59.142 Fetching value of define "__AVX512F__" : 1 00:02:59.142 Fetching value of define "__AVX512VL__" : 1 00:02:59.142 Fetching value of define "__PCLMUL__" : 1 00:02:59.142 Fetching value of define "__RDRND__" : 1 00:02:59.142 Fetching value of define "__RDSEED__" : 1 00:02:59.142 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:59.142 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:59.142 Message: lib/kvargs: Defining dependency "kvargs" 00:02:59.142 Message: lib/telemetry: Defining dependency "telemetry" 00:02:59.142 Checking for function "getentropy" : YES 00:02:59.142 Message: lib/eal: Defining dependency "eal" 00:02:59.142 Message: lib/ring: Defining dependency "ring" 00:02:59.142 Message: lib/rcu: Defining dependency "rcu" 00:02:59.142 Message: lib/mempool: Defining dependency "mempool" 00:02:59.142 Message: lib/mbuf: Defining dependency "mbuf" 00:02:59.142 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:59.142 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:59.142 Compiler for C supports arguments -mpclmul: YES 00:02:59.142 Compiler for C supports arguments -maes: YES 00:02:59.142 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:59.142 Compiler for C supports arguments -mavx512bw: YES 00:02:59.142 Compiler for C supports arguments -mavx512dq: YES 00:02:59.142 Compiler for C supports arguments -mavx512vl: YES 00:02:59.142 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:59.142 Compiler for C supports arguments -mavx2: YES 00:02:59.142 Compiler for C supports arguments -mavx: YES 00:02:59.142 Message: lib/net: Defining dependency "net" 00:02:59.142 Message: lib/meter: Defining dependency "meter" 00:02:59.142 Message: lib/ethdev: Defining dependency "ethdev" 00:02:59.142 Message: lib/pci: Defining dependency "pci" 00:02:59.142 Message: lib/cmdline: Defining dependency "cmdline" 00:02:59.142 Message: lib/metrics: Defining dependency "metrics" 00:02:59.142 Message: lib/hash: Defining dependency "hash" 00:02:59.142 Message: lib/timer: Defining dependency "timer" 00:02:59.142 Fetching value of define "__AVX2__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:59.142 Message: lib/acl: Defining dependency "acl" 00:02:59.142 Message: lib/bbdev: Defining dependency "bbdev" 00:02:59.142 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:59.142 Run-time dependency libelf found: YES 0.191 00:02:59.142 Message: lib/bpf: Defining dependency "bpf" 00:02:59.142 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:59.142 Message: lib/compressdev: Defining dependency "compressdev" 00:02:59.142 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:59.142 Message: lib/distributor: Defining dependency "distributor" 00:02:59.142 Message: lib/efd: Defining dependency "efd" 00:02:59.142 Message: lib/eventdev: Defining dependency "eventdev" 00:02:59.142 Message: lib/gpudev: Defining dependency "gpudev" 00:02:59.142 Message: lib/gro: Defining dependency "gro" 00:02:59.142 Message: lib/gso: Defining dependency "gso" 
00:02:59.142 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:59.142 Message: lib/jobstats: Defining dependency "jobstats" 00:02:59.142 Message: lib/latencystats: Defining dependency "latencystats" 00:02:59.142 Message: lib/lpm: Defining dependency "lpm" 00:02:59.142 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512IFMA__" : 1 00:02:59.142 Message: lib/member: Defining dependency "member" 00:02:59.142 Message: lib/pcapng: Defining dependency "pcapng" 00:02:59.142 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:59.142 Message: lib/power: Defining dependency "power" 00:02:59.142 Message: lib/rawdev: Defining dependency "rawdev" 00:02:59.142 Message: lib/regexdev: Defining dependency "regexdev" 00:02:59.142 Message: lib/dmadev: Defining dependency "dmadev" 00:02:59.142 Message: lib/rib: Defining dependency "rib" 00:02:59.142 Message: lib/reorder: Defining dependency "reorder" 00:02:59.142 Message: lib/sched: Defining dependency "sched" 00:02:59.142 Message: lib/security: Defining dependency "security" 00:02:59.142 Message: lib/stack: Defining dependency "stack" 00:02:59.142 Has header "linux/userfaultfd.h" : YES 00:02:59.142 Message: lib/vhost: Defining dependency "vhost" 00:02:59.142 Message: lib/ipsec: Defining dependency "ipsec" 00:02:59.142 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:59.142 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:59.142 Message: lib/fib: Defining dependency "fib" 00:02:59.142 Message: lib/port: Defining dependency "port" 00:02:59.142 Message: lib/pdump: Defining dependency "pdump" 00:02:59.142 Message: lib/table: Defining dependency "table" 00:02:59.142 Message: lib/pipeline: Defining dependency "pipeline" 00:02:59.142 Message: lib/graph: Defining dependency "graph" 00:02:59.142 Message: lib/node: Defining dependency "node" 00:02:59.142 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:59.142 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:59.142 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:59.142 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:59.142 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:59.143 Compiler for C supports arguments -Wno-unused-value: YES 00:02:59.143 Compiler for C supports arguments -Wno-format: YES 00:02:59.143 Compiler for C supports arguments -Wno-format-security: YES 00:02:59.143 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:59.143 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:59.143 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:59.143 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:00.533 Fetching value of define "__AVX2__" : 1 (cached) 00:03:00.533 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:00.533 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:00.533 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:00.533 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:00.533 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:00.533 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:00.533 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:00.533 Configuring doxy-api.conf using configuration 00:03:00.533 Program sphinx-build found: NO 00:03:00.533 
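The `Run-time dependency ... (tried pkgconfig)` results, e.g. libpcap 1.10.4 earlier and libelf 0.191 above, reduce to pkg-config lookups, roughly:

    $ pkg-config --modversion libpcap    # 1.10.4 on this builder
    $ pkg-config --modversion libelf     # 0.191
    $ pkg-config --exists jansson && echo YES || echo NO   # NO here, so jansson-backed features are skipped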
Configuring rte_build_config.h using configuration 00:03:00.533 Message: 00:03:00.533 ================= 00:03:00.533 Applications Enabled 00:03:00.533 ================= 00:03:00.533 00:03:00.533 apps: 00:03:00.533 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:03:00.533 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:03:00.533 test-security-perf, 00:03:00.533 00:03:00.533 Message: 00:03:00.533 ================= 00:03:00.533 Libraries Enabled 00:03:00.533 ================= 00:03:00.533 00:03:00.533 libs: 00:03:00.533 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:03:00.533 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:03:00.533 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:03:00.533 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:03:00.533 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:03:00.533 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:03:00.533 table, pipeline, graph, node, 00:03:00.533 00:03:00.533 Message: 00:03:00.533 =============== 00:03:00.533 Drivers Enabled 00:03:00.533 =============== 00:03:00.533 00:03:00.533 common: 00:03:00.533 00:03:00.533 bus: 00:03:00.533 pci, vdev, 00:03:00.533 mempool: 00:03:00.533 ring, 00:03:00.533 dma: 00:03:00.533 00:03:00.533 net: 00:03:00.533 i40e, 00:03:00.533 raw: 00:03:00.533 00:03:00.533 crypto: 00:03:00.533 00:03:00.533 compress: 00:03:00.533 00:03:00.533 regex: 00:03:00.533 00:03:00.533 vdpa: 00:03:00.533 00:03:00.533 event: 00:03:00.533 00:03:00.533 baseband: 00:03:00.533 00:03:00.533 gpu: 00:03:00.533 00:03:00.533 00:03:00.533 Message: 00:03:00.533 ================= 00:03:00.533 Content Skipped 00:03:00.533 ================= 00:03:00.533 00:03:00.533 apps: 00:03:00.533 00:03:00.533 libs: 00:03:00.533 kni: explicitly disabled via build config (deprecated lib) 00:03:00.533 flow_classify: explicitly disabled via build config (deprecated lib) 00:03:00.533 00:03:00.533 drivers: 00:03:00.533 common/cpt: not in enabled drivers build config 00:03:00.533 common/dpaax: not in enabled drivers build config 00:03:00.533 common/iavf: not in enabled drivers build config 00:03:00.533 common/idpf: not in enabled drivers build config 00:03:00.533 common/mvep: not in enabled drivers build config 00:03:00.533 common/octeontx: not in enabled drivers build config 00:03:00.533 bus/auxiliary: not in enabled drivers build config 00:03:00.533 bus/dpaa: not in enabled drivers build config 00:03:00.533 bus/fslmc: not in enabled drivers build config 00:03:00.533 bus/ifpga: not in enabled drivers build config 00:03:00.533 bus/vmbus: not in enabled drivers build config 00:03:00.533 common/cnxk: not in enabled drivers build config 00:03:00.533 common/mlx5: not in enabled drivers build config 00:03:00.533 common/qat: not in enabled drivers build config 00:03:00.533 common/sfc_efx: not in enabled drivers build config 00:03:00.533 mempool/bucket: not in enabled drivers build config 00:03:00.533 mempool/cnxk: not in enabled drivers build config 00:03:00.533 mempool/dpaa: not in enabled drivers build config 00:03:00.533 mempool/dpaa2: not in enabled drivers build config 00:03:00.533 mempool/octeontx: not in enabled drivers build config 00:03:00.533 mempool/stack: not in enabled drivers build config 00:03:00.533 dma/cnxk: not in enabled drivers build config 00:03:00.533 dma/dpaa: not in enabled drivers build config 00:03:00.533 dma/dpaa2: not in enabled 
drivers build config 00:03:00.533 dma/hisilicon: not in enabled drivers build config 00:03:00.533 dma/idxd: not in enabled drivers build config 00:03:00.533 dma/ioat: not in enabled drivers build config 00:03:00.533 dma/skeleton: not in enabled drivers build config 00:03:00.533 net/af_packet: not in enabled drivers build config 00:03:00.533 net/af_xdp: not in enabled drivers build config 00:03:00.533 net/ark: not in enabled drivers build config 00:03:00.533 net/atlantic: not in enabled drivers build config 00:03:00.533 net/avp: not in enabled drivers build config 00:03:00.533 net/axgbe: not in enabled drivers build config 00:03:00.533 net/bnx2x: not in enabled drivers build config 00:03:00.533 net/bnxt: not in enabled drivers build config 00:03:00.533 net/bonding: not in enabled drivers build config 00:03:00.533 net/cnxk: not in enabled drivers build config 00:03:00.533 net/cxgbe: not in enabled drivers build config 00:03:00.533 net/dpaa: not in enabled drivers build config 00:03:00.533 net/dpaa2: not in enabled drivers build config 00:03:00.533 net/e1000: not in enabled drivers build config 00:03:00.533 net/ena: not in enabled drivers build config 00:03:00.533 net/enetc: not in enabled drivers build config 00:03:00.533 net/enetfec: not in enabled drivers build config 00:03:00.533 net/enic: not in enabled drivers build config 00:03:00.533 net/failsafe: not in enabled drivers build config 00:03:00.533 net/fm10k: not in enabled drivers build config 00:03:00.533 net/gve: not in enabled drivers build config 00:03:00.533 net/hinic: not in enabled drivers build config 00:03:00.533 net/hns3: not in enabled drivers build config 00:03:00.533 net/iavf: not in enabled drivers build config 00:03:00.533 net/ice: not in enabled drivers build config 00:03:00.533 net/idpf: not in enabled drivers build config 00:03:00.533 net/igc: not in enabled drivers build config 00:03:00.533 net/ionic: not in enabled drivers build config 00:03:00.533 net/ipn3ke: not in enabled drivers build config 00:03:00.533 net/ixgbe: not in enabled drivers build config 00:03:00.533 net/kni: not in enabled drivers build config 00:03:00.533 net/liquidio: not in enabled drivers build config 00:03:00.533 net/mana: not in enabled drivers build config 00:03:00.533 net/memif: not in enabled drivers build config 00:03:00.533 net/mlx4: not in enabled drivers build config 00:03:00.533 net/mlx5: not in enabled drivers build config 00:03:00.533 net/mvneta: not in enabled drivers build config 00:03:00.533 net/mvpp2: not in enabled drivers build config 00:03:00.533 net/netvsc: not in enabled drivers build config 00:03:00.533 net/nfb: not in enabled drivers build config 00:03:00.533 net/nfp: not in enabled drivers build config 00:03:00.533 net/ngbe: not in enabled drivers build config 00:03:00.533 net/null: not in enabled drivers build config 00:03:00.533 net/octeontx: not in enabled drivers build config 00:03:00.533 net/octeon_ep: not in enabled drivers build config 00:03:00.533 net/pcap: not in enabled drivers build config 00:03:00.533 net/pfe: not in enabled drivers build config 00:03:00.533 net/qede: not in enabled drivers build config 00:03:00.534 net/ring: not in enabled drivers build config 00:03:00.534 net/sfc: not in enabled drivers build config 00:03:00.534 net/softnic: not in enabled drivers build config 00:03:00.534 net/tap: not in enabled drivers build config 00:03:00.534 net/thunderx: not in enabled drivers build config 00:03:00.534 net/txgbe: not in enabled drivers build config 00:03:00.534 net/vdev_netvsc: not in enabled drivers 
build config 00:03:00.534 net/vhost: not in enabled drivers build config 00:03:00.534 net/virtio: not in enabled drivers build config 00:03:00.534 net/vmxnet3: not in enabled drivers build config 00:03:00.534 raw/cnxk_bphy: not in enabled drivers build config 00:03:00.534 raw/cnxk_gpio: not in enabled drivers build config 00:03:00.534 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:00.534 raw/ifpga: not in enabled drivers build config 00:03:00.534 raw/ntb: not in enabled drivers build config 00:03:00.534 raw/skeleton: not in enabled drivers build config 00:03:00.534 crypto/armv8: not in enabled drivers build config 00:03:00.534 crypto/bcmfs: not in enabled drivers build config 00:03:00.534 crypto/caam_jr: not in enabled drivers build config 00:03:00.534 crypto/ccp: not in enabled drivers build config 00:03:00.534 crypto/cnxk: not in enabled drivers build config 00:03:00.534 crypto/dpaa_sec: not in enabled drivers build config 00:03:00.534 crypto/dpaa2_sec: not in enabled drivers build config 00:03:00.534 crypto/ipsec_mb: not in enabled drivers build config 00:03:00.534 crypto/mlx5: not in enabled drivers build config 00:03:00.534 crypto/mvsam: not in enabled drivers build config 00:03:00.534 crypto/nitrox: not in enabled drivers build config 00:03:00.534 crypto/null: not in enabled drivers build config 00:03:00.534 crypto/octeontx: not in enabled drivers build config 00:03:00.534 crypto/openssl: not in enabled drivers build config 00:03:00.534 crypto/scheduler: not in enabled drivers build config 00:03:00.534 crypto/uadk: not in enabled drivers build config 00:03:00.534 crypto/virtio: not in enabled drivers build config 00:03:00.534 compress/isal: not in enabled drivers build config 00:03:00.534 compress/mlx5: not in enabled drivers build config 00:03:00.534 compress/octeontx: not in enabled drivers build config 00:03:00.534 compress/zlib: not in enabled drivers build config 00:03:00.534 regex/mlx5: not in enabled drivers build config 00:03:00.534 regex/cn9k: not in enabled drivers build config 00:03:00.534 vdpa/ifc: not in enabled drivers build config 00:03:00.534 vdpa/mlx5: not in enabled drivers build config 00:03:00.534 vdpa/sfc: not in enabled drivers build config 00:03:00.534 event/cnxk: not in enabled drivers build config 00:03:00.534 event/dlb2: not in enabled drivers build config 00:03:00.534 event/dpaa: not in enabled drivers build config 00:03:00.534 event/dpaa2: not in enabled drivers build config 00:03:00.534 event/dsw: not in enabled drivers build config 00:03:00.534 event/opdl: not in enabled drivers build config 00:03:00.534 event/skeleton: not in enabled drivers build config 00:03:00.534 event/sw: not in enabled drivers build config 00:03:00.534 event/octeontx: not in enabled drivers build config 00:03:00.534 baseband/acc: not in enabled drivers build config 00:03:00.534 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:00.534 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:00.534 baseband/la12xx: not in enabled drivers build config 00:03:00.534 baseband/null: not in enabled drivers build config 00:03:00.534 baseband/turbo_sw: not in enabled drivers build config 00:03:00.534 gpu/cuda: not in enabled drivers build config 00:03:00.534 00:03:00.534 00:03:00.534 Build targets in project: 309 00:03:00.534 00:03:00.534 DPDK 22.11.4 00:03:00.534 00:03:00.534 User defined options 00:03:00.534 libdir : lib 00:03:00.534 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:00.534 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 
00:03:00.534 c_link_args : 00:03:00.534 enable_docs : false 00:03:00.534 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:00.534 enable_kmods : false 00:03:00.534 machine : native 00:03:00.534 tests : false 00:03:00.534 00:03:00.534 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:00.534 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:00.796 23:51:50 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:00.796 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:00.796 [1/738] Generating lib/rte_kvargs_mingw with a custom command 00:03:00.796 [2/738] Generating lib/rte_kvargs_def with a custom command 00:03:00.796 [3/738] Generating lib/rte_telemetry_def with a custom command 00:03:00.796 [4/738] Generating lib/rte_telemetry_mingw with a custom command 00:03:00.796 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:00.796 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:00.796 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:00.796 [8/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:00.796 [9/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:01.057 [10/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:01.057 [11/738] Linking static target lib/librte_kvargs.a 00:03:01.057 [12/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:01.057 [13/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:01.057 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:01.057 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:01.057 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:01.057 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:01.057 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:01.057 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:01.057 [20/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:01.057 [21/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:01.057 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:01.057 [23/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.318 [24/738] Linking target lib/librte_kvargs.so.23.0 00:03:01.318 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:01.318 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:01.318 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:01.318 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:01.318 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:01.318 [30/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:01.318 [31/738] Linking static target lib/librte_telemetry.a 00:03:01.318 [32/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:01.318 [33/738] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:01.318 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:01.318 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:01.318 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:01.318 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:01.579 [38/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:01.579 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:01.579 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:01.579 [41/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:01.579 [42/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.579 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:01.841 [44/738] Linking target lib/librte_telemetry.so.23.0 00:03:01.841 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:01.841 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:01.841 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:01.841 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:01.841 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:01.841 [50/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:01.841 [51/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:01.841 [52/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:01.841 [53/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:01.841 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:01.841 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:01.841 [56/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:01.841 [57/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:01.841 [58/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:01.841 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:01.841 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:01.841 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:01.841 [62/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:01.841 [63/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:02.102 [64/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:02.102 [65/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:02.102 [66/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:02.102 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:02.102 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:02.102 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:02.102 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:02.102 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:02.102 [72/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:02.102 [73/738] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:02.102 [74/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:02.102 [75/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:02.102 [76/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:02.102 [77/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:02.102 [78/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:02.102 [79/738] Generating lib/rte_eal_def with a custom command 00:03:02.102 [80/738] Generating lib/rte_eal_mingw with a custom command 00:03:02.102 [81/738] Generating lib/rte_ring_def with a custom command 00:03:02.102 [82/738] Generating lib/rte_ring_mingw with a custom command 00:03:02.363 [83/738] Generating lib/rte_rcu_def with a custom command 00:03:02.363 [84/738] Generating lib/rte_rcu_mingw with a custom command 00:03:02.363 [85/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:02.363 [86/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:02.363 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:02.363 [88/738] Linking static target lib/librte_ring.a 00:03:02.363 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:02.363 [90/738] Generating lib/rte_mempool_def with a custom command 00:03:02.363 [91/738] Generating lib/rte_mempool_mingw with a custom command 00:03:02.363 [92/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:02.363 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:02.624 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.624 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:02.624 [96/738] Generating lib/rte_mbuf_def with a custom command 00:03:02.624 [97/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:02.624 [98/738] Generating lib/rte_mbuf_mingw with a custom command 00:03:02.624 [99/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:02.624 [100/738] Linking static target lib/librte_eal.a 00:03:02.624 [101/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:02.885 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:02.885 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:02.885 [104/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:02.885 [105/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:02.885 [106/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:02.885 [107/738] Linking static target lib/librte_rcu.a 00:03:02.885 [108/738] Generating lib/rte_net_def with a custom command 00:03:02.885 [109/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:02.885 [110/738] Linking static target lib/librte_mempool.a 00:03:02.885 [111/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:02.885 [112/738] Generating lib/rte_net_mingw with a custom command 00:03:02.885 [113/738] Generating lib/rte_meter_def with a custom command 00:03:03.148 [114/738] Generating lib/rte_meter_mingw with a custom command 00:03:03.148 [115/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:03.148 [116/738] Linking static target lib/librte_meter.a 00:03:03.148 [117/738] Compiling C 
object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:03.148 [118/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:03.148 [119/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:03.148 [120/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.148 [121/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:03.148 [122/738] Linking static target lib/librte_net.a 00:03:03.148 [123/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.409 [124/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.409 [125/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:03.409 [126/738] Linking static target lib/librte_mbuf.a 00:03:03.409 [127/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:03.409 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:03.409 [129/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.670 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:03.670 [131/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:03.670 [132/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:03.670 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:03.670 [134/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.931 [135/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:03.931 [136/738] Generating lib/rte_ethdev_def with a custom command 00:03:03.931 [137/738] Generating lib/rte_ethdev_mingw with a custom command 00:03:03.931 [138/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:03.931 [139/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:03.931 [140/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:03.931 [141/738] Generating lib/rte_pci_def with a custom command 00:03:03.931 [142/738] Linking static target lib/librte_pci.a 00:03:03.931 [143/738] Generating lib/rte_pci_mingw with a custom command 00:03:03.931 [144/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:03.931 [145/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:04.193 [146/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.193 [147/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:04.193 [148/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:04.193 [149/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:04.193 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:04.193 [151/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:04.193 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:04.193 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:04.193 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:04.193 [155/738] Generating lib/rte_cmdline_def with a custom command 00:03:04.193 [156/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:04.193 [157/738] Generating lib/rte_cmdline_mingw with a custom command 00:03:04.193 
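By this point the early `Linking static target` steps have produced finished archives (librte_kvargs.a, librte_ring.a, librte_eal.a, ...) under the build directory. A quick sanity check of an archive's exports, assuming the `build-tmp` layout used above:

    $ nm -g --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_ring.a | grep ' T rte_ring_create'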
[158/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:04.193 [159/738] Generating lib/rte_metrics_def with a custom command 00:03:04.193 [160/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:04.193 [161/738] Generating lib/rte_metrics_mingw with a custom command 00:03:04.454 [162/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:04.454 [163/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:04.454 [164/738] Generating lib/rte_hash_def with a custom command 00:03:04.454 [165/738] Generating lib/rte_hash_mingw with a custom command 00:03:04.454 [166/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:04.454 [167/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:04.454 [168/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:04.454 [169/738] Generating lib/rte_timer_def with a custom command 00:03:04.454 [170/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:04.454 [171/738] Generating lib/rte_timer_mingw with a custom command 00:03:04.454 [172/738] Linking static target lib/librte_cmdline.a 00:03:04.715 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:04.715 [174/738] Linking static target lib/librte_metrics.a 00:03:04.715 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:04.715 [176/738] Linking static target lib/librte_timer.a 00:03:04.976 [177/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:04.976 [178/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:04.976 [179/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.976 [180/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.976 [181/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:04.976 [182/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:04.976 [183/738] Linking static target lib/librte_ethdev.a 00:03:05.237 [184/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.237 [185/738] Generating lib/rte_acl_def with a custom command 00:03:05.237 [186/738] Generating lib/rte_acl_mingw with a custom command 00:03:05.237 [187/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:05.237 [188/738] Generating lib/rte_bbdev_def with a custom command 00:03:05.237 [189/738] Generating lib/rte_bbdev_mingw with a custom command 00:03:05.237 [190/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:05.237 [191/738] Generating lib/rte_bitratestats_def with a custom command 00:03:05.237 [192/738] Generating lib/rte_bitratestats_mingw with a custom command 00:03:05.498 [193/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:05.498 [194/738] Linking static target lib/librte_bitratestats.a 00:03:05.498 [195/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:05.759 [196/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:05.759 [197/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:05.759 [198/738] Linking static target lib/librte_bbdev.a 00:03:05.759 [199/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.759 [200/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 
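Pulling together the `User defined options` block and the `ninja -C ... -j10` invocation recorded earlier, the configure-and-build step this log captures reduces to roughly the following (the wrapper actually ran the deprecated `meson [options]` spelling, hence the warning; `machine=` is likewise flagged as deprecated in favor of `cpu_instruction_set=`):

    $ cd /home/vagrant/spdk_repo/dpdk
    $ meson setup build-tmp \
          --prefix=/home/vagrant/spdk_repo/dpdk/build \
          --libdir=lib \
          -Dmachine=native \
          -Dtests=false \
          -Denable_docs=false \
          -Denable_kmods=false \
          -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
          -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow'
    $ ninja -C build-tmp -j10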
00:03:06.021 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:06.021 [202/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:06.021 [203/738] Linking static target lib/librte_hash.a 00:03:06.021 [204/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.282 [205/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:06.282 [206/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:06.668 [207/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:06.668 [208/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:06.668 [209/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:06.669 [210/738] Generating lib/rte_bpf_def with a custom command 00:03:06.669 [211/738] Generating lib/rte_bpf_mingw with a custom command 00:03:06.669 [212/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.669 [213/738] Generating lib/rte_cfgfile_def with a custom command 00:03:06.669 [214/738] Generating lib/rte_cfgfile_mingw with a custom command 00:03:06.669 [215/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:06.669 [216/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:06.669 [217/738] Linking static target lib/librte_cfgfile.a 00:03:06.669 [218/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:06.930 [219/738] Generating lib/rte_compressdev_def with a custom command 00:03:06.930 [220/738] Generating lib/rte_compressdev_mingw with a custom command 00:03:06.930 [221/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:06.930 [222/738] Linking static target lib/librte_acl.a 00:03:06.930 [223/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:06.930 [224/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:06.930 [225/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:06.930 [226/738] Generating lib/rte_cryptodev_def with a custom command 00:03:06.930 [227/738] Generating lib/rte_cryptodev_mingw with a custom command 00:03:06.930 [228/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.930 [229/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:06.930 [230/738] Linking static target lib/librte_bpf.a 00:03:07.189 [231/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:07.189 [232/738] Linking static target lib/librte_compressdev.a 00:03:07.189 [233/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.189 [234/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:07.189 [235/738] Generating lib/rte_distributor_def with a custom command 00:03:07.189 [236/738] Generating lib/rte_distributor_mingw with a custom command 00:03:07.189 [237/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.189 [238/738] Generating lib/rte_efd_def with a custom command 00:03:07.189 [239/738] Generating lib/rte_efd_mingw with a custom command 00:03:07.189 [240/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:07.447 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:07.447 [242/738] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:07.447 [243/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.447 [244/738] Linking target lib/librte_eal.so.23.0 00:03:07.447 [245/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.448 [246/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:07.706 [247/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:07.706 [248/738] Linking target lib/librte_ring.so.23.0 00:03:07.706 [249/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:07.706 [250/738] Linking target lib/librte_rcu.so.23.0 00:03:07.706 [251/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:07.706 [252/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:07.706 [253/738] Linking target lib/librte_mempool.so.23.0 00:03:07.706 [254/738] Linking target lib/librte_meter.so.23.0 00:03:07.706 [255/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:07.706 [256/738] Linking target lib/librte_pci.so.23.0 00:03:07.706 [257/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:07.964 [258/738] Linking target lib/librte_timer.so.23.0 00:03:07.964 [259/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:07.964 [260/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:07.964 [261/738] Linking target lib/librte_mbuf.so.23.0 00:03:07.964 [262/738] Linking target lib/librte_acl.so.23.0 00:03:07.964 [263/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:07.964 [264/738] Linking target lib/librte_cfgfile.so.23.0 00:03:07.964 [265/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:07.964 [266/738] Linking static target lib/librte_distributor.a 00:03:07.964 [267/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:07.964 [268/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:07.964 [269/738] Linking target lib/librte_net.so.23.0 00:03:07.964 [270/738] Linking target lib/librte_bbdev.so.23.0 00:03:07.964 [271/738] Linking target lib/librte_compressdev.so.23.0 00:03:08.222 [272/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:08.222 [273/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.222 [274/738] Linking target lib/librte_cmdline.so.23.0 00:03:08.222 [275/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:08.222 [276/738] Generating lib/rte_eventdev_def with a custom command 00:03:08.222 [277/738] Linking target lib/librte_hash.so.23.0 00:03:08.222 [278/738] Linking target lib/librte_distributor.so.23.0 00:03:08.222 [279/738] Generating lib/rte_eventdev_mingw with a custom command 00:03:08.222 [280/738] Generating lib/rte_gpudev_def with a custom command 00:03:08.222 [281/738] Generating lib/rte_gpudev_mingw with a custom command 00:03:08.222 [282/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:08.222 [283/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:08.222 [284/738] Linking static target lib/librte_efd.a 00:03:08.222 
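Besides the `.a` archives, each library also links as a versioned shared object (`Linking target lib/librte_eal.so.23.0` above), and a `.symbols` file is generated for later link steps to consume. The finished object can be inspected with standard binutils (illustrative):

    $ readelf -d /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.23.0 | grep SONAME
    $ nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.23.0 | grep -w rte_eal_init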
[285/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.480 [286/738] Linking target lib/librte_ethdev.so.23.0 00:03:08.480 [287/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:08.480 [288/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:08.480 [289/738] Linking target lib/librte_metrics.so.23.0 00:03:08.480 [290/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.480 [291/738] Linking target lib/librte_bpf.so.23.0 00:03:08.480 [292/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:08.480 [293/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:08.480 [294/738] Linking static target lib/librte_cryptodev.a 00:03:08.480 [295/738] Linking target lib/librte_bitratestats.so.23.0 00:03:08.480 [296/738] Linking target lib/librte_efd.so.23.0 00:03:08.738 [297/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:08.738 [298/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:08.738 [299/738] Generating lib/rte_gro_def with a custom command 00:03:08.738 [300/738] Generating lib/rte_gro_mingw with a custom command 00:03:08.738 [301/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:08.738 [302/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:08.738 [303/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:08.738 [304/738] Linking static target lib/librte_gpudev.a 00:03:08.995 [305/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:08.995 [306/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:08.995 [307/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:08.995 [308/738] Linking static target lib/librte_gro.a 00:03:08.995 [309/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:08.995 [310/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:08.995 [311/738] Generating lib/rte_gso_def with a custom command 00:03:08.995 [312/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:08.995 [313/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:09.253 [314/738] Generating lib/rte_gso_mingw with a custom command 00:03:09.253 [315/738] Linking static target lib/librte_eventdev.a 00:03:09.253 [316/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.253 [317/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:09.253 [318/738] Linking target lib/librte_gro.so.23.0 00:03:09.253 [319/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:09.253 [320/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:09.253 [321/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.253 [322/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:09.253 [323/738] Linking static target lib/librte_gso.a 00:03:09.253 [324/738] Linking target lib/librte_gpudev.so.23.0 00:03:09.253 [325/738] Generating lib/rte_ip_frag_def with a custom command 00:03:09.511 [326/738] Generating lib/rte_ip_frag_mingw with a custom command 00:03:09.511 [327/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.511 [328/738] 
Linking target lib/librte_gso.so.23.0 00:03:09.511 [329/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:09.511 [330/738] Generating lib/rte_jobstats_def with a custom command 00:03:09.511 [331/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:09.511 [332/738] Linking static target lib/librte_jobstats.a 00:03:09.511 [333/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:09.511 [334/738] Generating lib/rte_jobstats_mingw with a custom command 00:03:09.511 [335/738] Generating lib/rte_latencystats_def with a custom command 00:03:09.511 [336/738] Generating lib/rte_latencystats_mingw with a custom command 00:03:09.511 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:09.511 [338/738] Generating lib/rte_lpm_def with a custom command 00:03:09.770 [339/738] Generating lib/rte_lpm_mingw with a custom command 00:03:09.770 [340/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:09.770 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:09.770 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:09.770 [343/738] Linking static target lib/librte_ip_frag.a 00:03:09.770 [344/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.770 [345/738] Linking target lib/librte_jobstats.so.23.0 00:03:10.028 [346/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:10.028 [347/738] Linking static target lib/librte_latencystats.a 00:03:10.028 [348/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.028 [349/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:10.028 [350/738] Linking target lib/librte_ip_frag.so.23.0 00:03:10.028 [351/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:10.028 [352/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.028 [353/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.028 [354/738] Generating lib/rte_member_def with a custom command 00:03:10.028 [355/738] Linking target lib/librte_latencystats.so.23.0 00:03:10.028 [356/738] Generating lib/rte_member_mingw with a custom command 00:03:10.028 [357/738] Linking target lib/librte_cryptodev.so.23.0 00:03:10.028 [358/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:10.028 [359/738] Generating lib/rte_pcapng_def with a custom command 00:03:10.286 [360/738] Generating lib/rte_pcapng_mingw with a custom command 00:03:10.286 [361/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:10.286 [362/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:10.286 [363/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:10.286 [364/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:10.286 [365/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:10.286 [366/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:10.544 [367/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:10.544 [368/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:10.545 [369/738] Compiling C 
object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:10.545 [370/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:10.545 [371/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:10.545 [372/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.545 [373/738] Generating lib/rte_power_def with a custom command 00:03:10.545 [374/738] Linking static target lib/librte_lpm.a 00:03:10.545 [375/738] Generating lib/rte_power_mingw with a custom command 00:03:10.545 [376/738] Linking target lib/librte_eventdev.so.23.0 00:03:10.545 [377/738] Generating lib/rte_rawdev_def with a custom command 00:03:10.545 [378/738] Generating lib/rte_rawdev_mingw with a custom command 00:03:10.545 [379/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:10.803 [380/738] Generating lib/rte_regexdev_def with a custom command 00:03:10.803 [381/738] Generating lib/rte_regexdev_mingw with a custom command 00:03:10.803 [382/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:10.803 [383/738] Generating lib/rte_dmadev_def with a custom command 00:03:10.803 [384/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:10.803 [385/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:10.803 [386/738] Generating lib/rte_dmadev_mingw with a custom command 00:03:10.803 [387/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:10.803 [388/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.803 [389/738] Linking static target lib/librte_pcapng.a 00:03:10.803 [390/738] Generating lib/rte_rib_def with a custom command 00:03:10.803 [391/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:10.803 [392/738] Linking static target lib/librte_rawdev.a 00:03:10.803 [393/738] Linking target lib/librte_lpm.so.23.0 00:03:10.803 [394/738] Generating lib/rte_rib_mingw with a custom command 00:03:10.803 [395/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:10.803 [396/738] Linking static target lib/librte_power.a 00:03:11.061 [397/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:11.061 [398/738] Generating lib/rte_reorder_def with a custom command 00:03:11.061 [399/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:11.061 [400/738] Generating lib/rte_reorder_mingw with a custom command 00:03:11.061 [401/738] Linking static target lib/librte_dmadev.a 00:03:11.061 [402/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.061 [403/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:11.061 [404/738] Linking static target lib/librte_member.a 00:03:11.061 [405/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:11.061 [406/738] Linking static target lib/librte_regexdev.a 00:03:11.061 [407/738] Linking target lib/librte_pcapng.so.23.0 00:03:11.061 [408/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:11.061 [409/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.319 [410/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:11.319 [411/738] Linking target lib/librte_rawdev.so.23.0 00:03:11.319 [412/738] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:11.319 [413/738] Linking static target lib/librte_reorder.a 00:03:11.319 [414/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:11.319 [415/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:11.319 [416/738] Generating lib/rte_sched_def with a custom command 00:03:11.319 [417/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:11.319 [418/738] Generating lib/rte_sched_mingw with a custom command 00:03:11.319 [419/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.319 [420/738] Generating lib/rte_security_mingw with a custom command 00:03:11.319 [421/738] Generating lib/rte_security_def with a custom command 00:03:11.319 [422/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.319 [423/738] Linking target lib/librte_member.so.23.0 00:03:11.319 [424/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:11.319 [425/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:11.319 [426/738] Linking target lib/librte_dmadev.so.23.0 00:03:11.319 [427/738] Linking static target lib/librte_rib.a 00:03:11.319 [428/738] Generating lib/rte_stack_def with a custom command 00:03:11.319 [429/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:11.319 [430/738] Generating lib/rte_stack_mingw with a custom command 00:03:11.319 [431/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.319 [432/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:11.319 [433/738] Linking static target lib/librte_stack.a 00:03:11.319 [434/738] Linking target lib/librte_reorder.so.23.0 00:03:11.576 [435/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:11.576 [436/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:11.576 [437/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.576 [438/738] Linking target lib/librte_power.so.23.0 00:03:11.576 [439/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.576 [440/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.576 [441/738] Linking target lib/librte_stack.so.23.0 00:03:11.576 [442/738] Linking target lib/librte_regexdev.so.23.0 00:03:11.835 [443/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.835 [444/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:11.835 [445/738] Linking static target lib/librte_security.a 00:03:11.835 [446/738] Linking target lib/librte_rib.so.23.0 00:03:11.835 [447/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:11.835 [448/738] Generating lib/rte_vhost_def with a custom command 00:03:11.835 [449/738] Generating lib/rte_vhost_mingw with a custom command 00:03:11.835 [450/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:11.835 [451/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:12.093 [452/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.093 [453/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:12.093 [454/738] Linking target lib/librte_security.so.23.0 00:03:12.093 [455/738] Generating symbol 
file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:12.093 [456/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:12.093 [457/738] Linking static target lib/librte_sched.a 00:03:12.352 [458/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:12.352 [459/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:12.352 [460/738] Generating lib/rte_ipsec_def with a custom command 00:03:12.352 [461/738] Generating lib/rte_ipsec_mingw with a custom command 00:03:12.352 [462/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:12.352 [463/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:12.352 [464/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.352 [465/738] Linking target lib/librte_sched.so.23.0 00:03:12.352 [466/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:12.612 [467/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:12.612 [468/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:12.612 [469/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:12.612 [470/738] Generating lib/rte_fib_def with a custom command 00:03:12.612 [471/738] Generating lib/rte_fib_mingw with a custom command 00:03:12.612 [472/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:12.870 [473/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:12.870 [474/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:12.870 [475/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:12.870 [476/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:12.870 [477/738] Linking static target lib/librte_ipsec.a 00:03:13.129 [478/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:13.129 [479/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:13.129 [480/738] Linking static target lib/librte_fib.a 00:03:13.129 [481/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:13.129 [482/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:13.129 [483/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.387 [484/738] Linking target lib/librte_ipsec.so.23.0 00:03:13.387 [485/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:13.387 [486/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.387 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:13.387 [488/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:13.387 [489/738] Linking target lib/librte_fib.so.23.0 00:03:13.646 [490/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:13.646 [491/738] Generating lib/rte_port_def with a custom command 00:03:13.646 [492/738] Generating lib/rte_port_mingw with a custom command 00:03:13.646 [493/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:13.646 [494/738] Generating lib/rte_pdump_def with a custom command 00:03:13.905 [495/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:13.905 [496/738] Generating lib/rte_pdump_mingw with a custom command 00:03:13.905 [497/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:13.905 [498/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 
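Note the dedicated AVX-512 objects going by (member_rte_member_sketch_avx512, fib_trie_avx512, fib_dir24_8_avx512): they are built because the `-mavx512*` probes earlier returned YES, and DPDK selects between scalar and vector paths at runtime. Whether the machine that will eventually run the binaries offers those ISA extensions can be checked with:

    $ lscpu | grep -o 'avx512[a-z]*' | sort -u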
00:03:13.905 [499/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:13.905 [500/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:13.905 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:13.905 [502/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:14.162 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:14.162 [504/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:14.162 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:14.162 [506/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:14.162 [507/738] Linking static target lib/librte_port.a 00:03:14.420 [508/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:14.420 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:14.420 [510/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:14.420 [511/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:14.420 [512/738] Linking static target lib/librte_pdump.a 00:03:14.677 [513/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.677 [514/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.677 [515/738] Linking target lib/librte_pdump.so.23.0 00:03:14.677 [516/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:14.677 [517/738] Linking target lib/librte_port.so.23.0 00:03:14.677 [518/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:14.677 [519/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:14.935 [520/738] Generating lib/rte_table_def with a custom command 00:03:14.935 [521/738] Generating lib/rte_table_mingw with a custom command 00:03:14.935 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:14.935 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:14.935 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:14.935 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:14.935 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:14.935 [527/738] Generating lib/rte_pipeline_def with a custom command 00:03:14.935 [528/738] Generating lib/rte_pipeline_mingw with a custom command 00:03:14.935 [529/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:15.193 [530/738] Linking static target lib/librte_table.a 00:03:15.193 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:15.450 [532/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:15.450 [533/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:15.450 [534/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.450 [535/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:15.450 [536/738] Linking target lib/librte_table.so.23.0 00:03:15.450 [537/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:15.708 [538/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:15.708 [539/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 
00:03:15.708 [540/738] Generating lib/rte_graph_def with a custom command 00:03:15.708 [541/738] Generating lib/rte_graph_mingw with a custom command 00:03:15.708 [542/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:15.708 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:15.966 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:15.966 [545/738] Linking static target lib/librte_graph.a 00:03:15.966 [546/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:15.966 [547/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:15.966 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:15.966 [549/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:15.966 [550/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:16.225 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:16.225 [552/738] Generating lib/rte_node_def with a custom command 00:03:16.225 [553/738] Generating lib/rte_node_mingw with a custom command 00:03:16.225 [554/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:16.225 [555/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:16.225 [556/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.487 [557/738] Linking target lib/librte_graph.so.23.0 00:03:16.487 [558/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:16.487 [559/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:16.487 [560/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:16.487 [561/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:16.487 [562/738] Generating drivers/rte_bus_pci_def with a custom command 00:03:16.487 [563/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:16.487 [564/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:16.487 [565/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:16.487 [566/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:16.487 [567/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:16.487 [568/738] Generating drivers/rte_bus_vdev_def with a custom command 00:03:16.487 [569/738] Generating drivers/rte_mempool_ring_def with a custom command 00:03:16.487 [570/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:16.487 [571/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:16.487 [572/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:16.747 [573/738] Linking static target lib/librte_node.a 00:03:16.747 [574/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:16.747 [575/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:16.747 [576/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:16.747 [577/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:16.747 [578/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:16.747 [579/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.747 [580/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:16.747 [581/738] Linking target lib/librte_node.so.23.0 
00:03:16.747 [582/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:16.747 [583/738] Linking static target drivers/librte_bus_vdev.a 00:03:16.747 [584/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:16.747 [585/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:17.005 [586/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:17.005 [587/738] Linking static target drivers/librte_bus_pci.a 00:03:17.005 [588/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.005 [589/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:17.005 [590/738] Linking target drivers/librte_bus_vdev.so.23.0 00:03:17.005 [591/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:17.005 [592/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:17.264 [593/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:17.264 [594/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.264 [595/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:17.264 [596/738] Linking target drivers/librte_bus_pci.so.23.0 00:03:17.264 [597/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:17.264 [598/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:17.264 [599/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:17.523 [600/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:17.523 [601/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:17.523 [602/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:17.523 [603/738] Linking static target drivers/librte_mempool_ring.a 00:03:17.523 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:17.523 [605/738] Linking target drivers/librte_mempool_ring.so.23.0 00:03:17.523 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:17.782 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:18.041 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:18.041 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:18.041 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:18.608 [611/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:18.608 [612/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:18.608 [613/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:18.608 [614/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:18.608 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:18.608 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:18.608 [617/738] Generating drivers/rte_net_i40e_def with a custom command 00:03:18.867 [618/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:18.867 [619/738] 
Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:19.126 [620/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:19.692 [621/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:19.692 [622/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:19.692 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:19.692 [624/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:19.692 [625/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:19.692 [626/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:19.692 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:19.692 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:19.950 [629/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:19.950 [630/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:19.950 [631/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:20.208 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:20.208 [633/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:20.466 [634/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:20.466 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:20.466 [636/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:20.466 [637/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:20.466 [638/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:20.724 [639/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:20.724 [640/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:20.724 [641/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:20.724 [642/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:20.724 [643/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:20.724 [644/738] Linking static target drivers/librte_net_i40e.a 00:03:20.724 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:20.724 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:20.983 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:20.983 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:20.983 [649/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.241 [650/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:21.241 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:21.241 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:21.241 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:21.241 [654/738] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:21.241 [655/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:21.241 [656/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:21.500 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:21.500 [658/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:21.500 [659/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:21.500 [660/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:21.500 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:21.759 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:21.759 [663/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:21.759 [664/738] Linking static target lib/librte_vhost.a 00:03:21.759 [665/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:21.759 [666/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:22.327 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:22.327 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:22.327 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:22.327 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:22.327 [671/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.327 [672/738] Linking target lib/librte_vhost.so.23.0 00:03:22.327 [673/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:22.585 [674/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:22.585 [675/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:22.585 [676/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:22.585 [677/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:22.844 [678/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:22.844 [679/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:22.844 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:22.844 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:22.844 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:22.844 [683/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:23.103 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:23.103 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:23.103 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:23.103 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:23.103 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:23.361 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:23.361 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:23.361 [691/738] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:23.620 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:23.620 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:23.878 [694/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:23.878 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:23.878 [696/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:23.878 [697/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:24.167 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:24.167 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:24.425 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:24.426 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:24.426 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:24.426 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:24.426 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:24.683 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:24.683 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:24.942 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:24.942 [708/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:24.942 [709/738] Linking static target lib/librte_pipeline.a 00:03:24.942 [710/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:24.942 [711/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:25.199 [712/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:25.199 [713/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:25.199 [714/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:25.199 [715/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:25.199 [716/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:25.199 [717/738] Linking target app/dpdk-pdump 00:03:25.199 [718/738] Linking target app/dpdk-proc-info 00:03:25.457 [719/738] Linking target app/dpdk-dumpcap 00:03:25.457 [720/738] Linking target app/dpdk-test-acl 00:03:25.457 [721/738] Linking target app/dpdk-test-bbdev 00:03:25.457 [722/738] Linking target app/dpdk-test-cmdline 00:03:25.457 [723/738] Linking target app/dpdk-test-compress-perf 00:03:25.457 [724/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:25.457 [725/738] Linking target app/dpdk-test-eventdev 00:03:25.457 [726/738] Linking target app/dpdk-test-fib 00:03:25.457 [727/738] Linking target app/dpdk-test-crypto-perf 00:03:25.714 [728/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:25.714 [729/738] Linking target app/dpdk-test-flow-perf 00:03:25.714 [730/738] Linking target app/dpdk-test-gpudev 00:03:25.714 [731/738] Linking target app/dpdk-test-pipeline 00:03:25.714 [732/738] Linking target app/dpdk-test-regex 00:03:25.714 [733/738] Linking target app/dpdk-test-sad 00:03:25.999 [734/738] Linking target app/dpdk-testpmd 00:03:26.257 [735/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:26.516 [736/738] Linking target app/dpdk-test-security-perf 00:03:27.452 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:27.710 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:27.710 23:52:17 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:27.710 23:52:17 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:27.710 23:52:17 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:27.710 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:27.710 [0/1] Installing files. 00:03:27.972 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 
00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.972 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:27.973 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.974 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.974 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:27.975 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:27.975 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:27.976 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:27.977 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:27.977 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:27.977 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:27.977 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:27.977 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:27.977 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:27.977 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:27.977 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:27.977 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:27.977 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:27.977 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:27.977 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:27.977 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:27.977 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:27.977 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:27.977 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:28.239 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:28.239 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:28.239 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:28.239 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:28.239 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:28.239 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:28.239 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:28.239 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.239 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.240 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.241 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:28.242 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:28.242 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:28.242 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:28.242 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:28.242 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:28.242 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:28.242 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:28.242 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:28.242 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:28.242 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:28.242 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:28.242 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:28.242 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:28.242 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:28.242 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:28.242 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:28.242 Installing 
symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:28.242 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:28.242 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:28.242 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:28.242 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:28.242 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:28.242 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:28.242 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:28.242 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:28.242 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:28.242 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:28.242 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:28.242 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:28.242 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:28.242 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:28.242 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:28.242 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:28.242 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:28.242 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:28.242 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:28.242 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:28.242 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:28.242 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:28.242 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:28.242 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:28.242 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:28.242 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:28.242 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:28.242 Installing symlink pointing to 
librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:28.242 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:28.242 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:28.242 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:28.242 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:28.242 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:28.242 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:28.242 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:28.242 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:28.242 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:28.242 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:28.242 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:28.242 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:28.242 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:28.242 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:28.242 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:28.242 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:28.242 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:28.242 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:28.242 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:28.242 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:28.242 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:28.242 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:28.242 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:28.242 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:28.242 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:28.242 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:28.243 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:28.243 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:28.243 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:28.243 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:28.243 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:28.243 Installing symlink pointing to librte_lpm.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:28.243 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:28.243 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:28.243 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:28.243 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:28.243 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:28.243 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:28.243 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:28.243 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:28.243 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:28.243 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:28.243 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:28.243 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:28.243 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:28.243 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:28.243 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:28.243 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:28.243 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:28.243 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:28.243 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:28.243 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:28.243 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:28.243 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:28.243 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:28.243 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:28.243 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:28.243 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:28.243 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:28.243 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 
00:03:28.243 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:28.243 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:28.243 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:28.243 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:28.243 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:28.243 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:28.243 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:28.243 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:28.243 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:28.243 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:28.243 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:28.243 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:28.243 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:28.243 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:28.243 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:28.243 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:28.243 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:28.243 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:28.243 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:28.243 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:28.243 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:28.243 23:52:18 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:28.243 23:52:18 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:28.243 00:03:28.243 real 0m33.536s 00:03:28.243 user 3m36.433s 00:03:28.243 sys 0m34.150s 00:03:28.243 23:52:18 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:28.243 ************************************ 00:03:28.243 END TEST build_native_dpdk 00:03:28.243 ************************************ 00:03:28.243 23:52:18 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:28.243 23:52:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:28.243 23:52:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 
]] 00:03:28.243 23:52:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:28.243 23:52:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:28.243 23:52:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:28.243 23:52:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:28.243 23:52:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:28.243 23:52:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:28.501 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:28.501 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:28.501 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:28.501 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:28.760 Using 'verbs' RDMA provider 00:03:39.668 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:51.913 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:51.913 Creating mk/config.mk...done. 00:03:51.913 Creating mk/cc.flags.mk...done. 00:03:51.913 Type 'make' to build. 00:03:51.913 23:52:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:51.913 23:52:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:51.913 23:52:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:51.913 23:52:40 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.913 ************************************ 00:03:51.913 START TEST make 00:03:51.913 ************************************ 00:03:51.913 23:52:40 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:51.913 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:51.913 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:51.913 meson setup builddir \ 00:03:51.913 -Dwith-libaio=enabled \ 00:03:51.913 -Dwith-liburing=enabled \ 00:03:51.913 -Dwith-libvfn=disabled \ 00:03:51.913 -Dwith-spdk=false && \ 00:03:51.913 meson compile -C builddir && \ 00:03:51.913 cd -) 00:03:51.913 make[1]: Nothing to be done for 'all'. 
00:03:52.857 The Meson build system 00:03:52.857 Version: 1.5.0 00:03:52.857 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:52.857 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:52.857 Build type: native build 00:03:52.857 Project name: xnvme 00:03:52.857 Project version: 0.7.3 00:03:52.857 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:52.857 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:52.857 Host machine cpu family: x86_64 00:03:52.857 Host machine cpu: x86_64 00:03:52.857 Message: host_machine.system: linux 00:03:52.857 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:52.857 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:52.857 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:52.857 Run-time dependency threads found: YES 00:03:52.857 Has header "setupapi.h" : NO 00:03:52.857 Has header "linux/blkzoned.h" : YES 00:03:52.857 Has header "linux/blkzoned.h" : YES (cached) 00:03:52.857 Has header "libaio.h" : YES 00:03:52.857 Library aio found: YES 00:03:52.858 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:52.858 Run-time dependency liburing found: YES 2.2 00:03:52.858 Dependency libvfn skipped: feature with-libvfn disabled 00:03:52.858 Run-time dependency appleframeworks found: NO (tried framework) 00:03:52.858 Run-time dependency appleframeworks found: NO (tried framework) 00:03:52.858 Configuring xnvme_config.h using configuration 00:03:52.858 Configuring xnvme.spec using configuration 00:03:52.858 Run-time dependency bash-completion found: YES 2.11 00:03:52.858 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:52.858 Program cp found: YES (/usr/bin/cp) 00:03:52.858 Has header "winsock2.h" : NO 00:03:52.858 Has header "dbghelp.h" : NO 00:03:52.858 Library rpcrt4 found: NO 00:03:52.858 Library rt found: YES 00:03:52.858 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:52.858 Found CMake: /usr/bin/cmake (3.27.7) 00:03:52.858 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:52.858 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:52.858 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:52.858 Build targets in project: 32 00:03:52.858 00:03:52.858 xnvme 0.7.3 00:03:52.858 00:03:52.858 User defined options 00:03:52.858 with-libaio : enabled 00:03:52.858 with-liburing: enabled 00:03:52.858 with-libvfn : disabled 00:03:52.858 with-spdk : false 00:03:52.858 00:03:52.858 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:53.430 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:53.430 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:53.430 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:53.430 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:53.430 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:53.430 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:53.430 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:53.430 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:53.430 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:53.430 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:53.430 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:53.692 
[11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:53.692 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:53.692 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:53.692 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:53.692 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:53.692 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:53.692 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:53.692 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:53.692 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:53.692 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:53.692 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:53.692 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:53.692 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:53.692 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:53.692 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:53.692 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:53.692 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:53.692 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:53.692 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:53.692 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:53.692 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:53.692 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:53.692 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:53.953 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:53.953 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:53.953 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:53.953 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:53.953 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:53.953 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:53.953 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:53.953 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:53.953 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:53.953 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:53.953 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:53.953 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:53.953 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:53.953 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:53.953 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:53.953 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:53.953 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:53.953 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:53.953 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:53.953 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:53.953 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:53.953 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:53.953 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:53.953 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:53.953 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:53.953 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:53.953 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:53.953 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:53.953 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:53.953 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:53.953 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:54.214 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:54.214 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:54.214 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:54.214 [68/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:54.214 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:54.214 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:54.214 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:54.214 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:54.214 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:54.214 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:54.214 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:54.215 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:54.215 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:54.215 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:54.215 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:54.215 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:54.215 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:54.215 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:54.215 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:54.475 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:54.475 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:54.475 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:54.475 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:54.475 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:54.475 [89/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:54.475 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:54.475 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:54.475 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:54.475 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:54.475 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:54.475 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:54.475 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:54.475 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:54.475 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:54.475 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:54.475 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:54.475 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:54.476 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:54.476 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:54.476 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:54.476 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:54.476 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:54.476 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:54.476 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:54.476 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:54.476 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:54.476 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:54.476 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:54.476 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:54.476 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:54.476 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:54.476 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:54.476 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:54.476 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:54.476 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:54.736 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:54.736 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:54.736 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:54.736 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:54.736 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:54.736 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:54.736 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:54.736 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:54.736 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:54.736 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:54.736 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:54.736 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:54.736 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:54.736 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:54.736 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:54.736 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:54.736 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:54.736 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:54.736 [138/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:54.736 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:54.736 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:54.996 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:54.996 [142/203] Compiling C object 
lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:54.997 [143/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:54.997 [144/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:54.997 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:54.997 [146/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:54.997 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:54.997 [148/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:54.997 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:54.997 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:54.997 [151/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:54.997 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:54.997 [153/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:54.997 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:54.997 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:55.257 [156/203] Linking target lib/libxnvme.so 00:03:55.257 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:55.257 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:55.257 [159/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:55.257 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:55.257 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:55.257 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:55.257 [163/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:55.257 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:55.257 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:55.257 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:55.257 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:55.257 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:55.257 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:55.257 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:55.257 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:55.518 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:55.518 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:55.518 [174/203] Linking static target lib/libxnvme.a 00:03:55.518 [175/203] Linking target tests/xnvme_tests_lblk 00:03:55.518 [176/203] Linking target tests/xnvme_tests_buf 00:03:55.518 [177/203] Linking target tests/xnvme_tests_async_intf 00:03:55.518 [178/203] Linking target tests/xnvme_tests_cli 00:03:55.518 [179/203] Linking target tests/xnvme_tests_xnvme_file 00:03:55.518 [180/203] Linking target tests/xnvme_tests_znd_append 00:03:55.518 [181/203] Linking target tests/xnvme_tests_enum 00:03:55.518 [182/203] Linking target tests/xnvme_tests_scc 00:03:55.518 [183/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:55.518 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:55.518 [185/203] Linking target tests/xnvme_tests_ioworker 00:03:55.518 [186/203] Linking target tests/xnvme_tests_znd_state 00:03:55.518 [187/203] Linking target tools/xdd 00:03:55.518 [188/203] Linking target tests/xnvme_tests_kvs 00:03:55.518 [189/203] Linking target tools/xnvme 00:03:55.518 [190/203] Linking target tests/xnvme_tests_map 00:03:55.518 [191/203] Linking target 
tools/lblk 00:03:55.518 [192/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:55.518 [193/203] Linking target examples/xnvme_enum 00:03:55.518 [194/203] Linking target examples/xnvme_dev 00:03:55.518 [195/203] Linking target tools/xnvme_file 00:03:55.518 [196/203] Linking target tools/zoned 00:03:55.518 [197/203] Linking target tools/kvs 00:03:55.518 [198/203] Linking target examples/xnvme_hello 00:03:55.518 [199/203] Linking target examples/xnvme_single_async 00:03:55.518 [200/203] Linking target examples/xnvme_io_async 00:03:55.518 [201/203] Linking target examples/xnvme_single_sync 00:03:55.518 [202/203] Linking target examples/zoned_io_async 00:03:55.518 [203/203] Linking target examples/zoned_io_sync 00:03:55.518 INFO: autodetecting backend as ninja 00:03:55.518 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:55.518 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:27.626 CC lib/log/log_deprecated.o 00:04:27.626 CC lib/log/log.o 00:04:27.626 CC lib/log/log_flags.o 00:04:27.626 CC lib/ut_mock/mock.o 00:04:27.626 CC lib/ut/ut.o 00:04:27.626 LIB libspdk_log.a 00:04:27.626 LIB libspdk_ut.a 00:04:27.626 LIB libspdk_ut_mock.a 00:04:27.626 SO libspdk_log.so.7.0 00:04:27.626 SO libspdk_ut.so.2.0 00:04:27.626 SO libspdk_ut_mock.so.6.0 00:04:27.626 SYMLINK libspdk_ut.so 00:04:27.626 SYMLINK libspdk_ut_mock.so 00:04:27.626 SYMLINK libspdk_log.so 00:04:27.626 CC lib/ioat/ioat.o 00:04:27.626 CC lib/dma/dma.o 00:04:27.626 CC lib/util/base64.o 00:04:27.626 CC lib/util/bit_array.o 00:04:27.626 CC lib/util/cpuset.o 00:04:27.626 CC lib/util/crc16.o 00:04:27.626 CXX lib/trace_parser/trace.o 00:04:27.626 CC lib/util/crc32c.o 00:04:27.626 CC lib/util/crc32.o 00:04:27.626 CC lib/vfio_user/host/vfio_user_pci.o 00:04:27.626 CC lib/util/crc32_ieee.o 00:04:27.626 CC lib/util/crc64.o 00:04:27.626 CC lib/util/dif.o 00:04:27.626 CC lib/util/fd.o 00:04:27.626 LIB libspdk_dma.a 00:04:27.626 CC lib/vfio_user/host/vfio_user.o 00:04:27.626 SO libspdk_dma.so.5.0 00:04:27.626 CC lib/util/fd_group.o 00:04:27.626 CC lib/util/file.o 00:04:27.626 CC lib/util/hexlify.o 00:04:27.626 SYMLINK libspdk_dma.so 00:04:27.626 CC lib/util/iov.o 00:04:27.626 LIB libspdk_ioat.a 00:04:27.626 CC lib/util/math.o 00:04:27.626 SO libspdk_ioat.so.7.0 00:04:27.626 CC lib/util/net.o 00:04:27.626 SYMLINK libspdk_ioat.so 00:04:27.626 CC lib/util/pipe.o 00:04:27.626 CC lib/util/strerror_tls.o 00:04:27.626 CC lib/util/string.o 00:04:27.626 LIB libspdk_vfio_user.a 00:04:27.626 CC lib/util/uuid.o 00:04:27.626 SO libspdk_vfio_user.so.5.0 00:04:27.626 CC lib/util/xor.o 00:04:27.626 CC lib/util/zipf.o 00:04:27.626 SYMLINK libspdk_vfio_user.so 00:04:27.626 CC lib/util/md5.o 00:04:27.626 LIB libspdk_util.a 00:04:27.626 SO libspdk_util.so.10.0 00:04:27.626 LIB libspdk_trace_parser.a 00:04:27.626 SO libspdk_trace_parser.so.6.0 00:04:27.626 SYMLINK libspdk_util.so 00:04:27.627 SYMLINK libspdk_trace_parser.so 00:04:27.627 CC lib/vmd/vmd.o 00:04:27.627 CC lib/conf/conf.o 00:04:27.627 CC lib/vmd/led.o 00:04:27.627 CC lib/idxd/idxd.o 00:04:27.627 CC lib/idxd/idxd_user.o 00:04:27.627 CC lib/idxd/idxd_kernel.o 00:04:27.627 CC lib/json/json_parse.o 00:04:27.627 CC lib/env_dpdk/env.o 00:04:27.627 CC lib/rdma_provider/common.o 00:04:27.627 CC lib/rdma_utils/rdma_utils.o 00:04:27.627 CC lib/env_dpdk/memory.o 00:04:27.627 CC lib/env_dpdk/pci.o 00:04:27.627 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:27.627 LIB libspdk_conf.a 00:04:27.627 CC lib/json/json_util.o 00:04:27.627 SO libspdk_conf.so.6.0 
00:04:27.627 CC lib/json/json_write.o 00:04:27.627 SYMLINK libspdk_conf.so 00:04:27.627 LIB libspdk_rdma_utils.a 00:04:27.627 CC lib/env_dpdk/init.o 00:04:27.627 SO libspdk_rdma_utils.so.1.0 00:04:27.627 LIB libspdk_rdma_provider.a 00:04:27.627 SYMLINK libspdk_rdma_utils.so 00:04:27.627 CC lib/env_dpdk/threads.o 00:04:27.627 SO libspdk_rdma_provider.so.6.0 00:04:27.627 SYMLINK libspdk_rdma_provider.so 00:04:27.627 CC lib/env_dpdk/pci_ioat.o 00:04:27.627 CC lib/env_dpdk/pci_virtio.o 00:04:27.627 CC lib/env_dpdk/pci_vmd.o 00:04:27.627 LIB libspdk_json.a 00:04:27.627 CC lib/env_dpdk/pci_idxd.o 00:04:27.627 SO libspdk_json.so.6.0 00:04:27.627 CC lib/env_dpdk/pci_event.o 00:04:27.627 CC lib/env_dpdk/sigbus_handler.o 00:04:27.627 SYMLINK libspdk_json.so 00:04:27.627 CC lib/env_dpdk/pci_dpdk.o 00:04:27.627 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:27.627 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:27.627 LIB libspdk_idxd.a 00:04:27.627 LIB libspdk_vmd.a 00:04:27.627 SO libspdk_idxd.so.12.1 00:04:27.627 SO libspdk_vmd.so.6.0 00:04:27.627 CC lib/jsonrpc/jsonrpc_server.o 00:04:27.627 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:27.627 SYMLINK libspdk_idxd.so 00:04:27.627 CC lib/jsonrpc/jsonrpc_client.o 00:04:27.627 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:27.627 SYMLINK libspdk_vmd.so 00:04:27.627 LIB libspdk_jsonrpc.a 00:04:27.627 SO libspdk_jsonrpc.so.6.0 00:04:27.627 SYMLINK libspdk_jsonrpc.so 00:04:27.627 CC lib/rpc/rpc.o 00:04:27.627 LIB libspdk_env_dpdk.a 00:04:27.627 LIB libspdk_rpc.a 00:04:27.627 SO libspdk_rpc.so.6.0 00:04:27.627 SO libspdk_env_dpdk.so.15.0 00:04:27.627 SYMLINK libspdk_rpc.so 00:04:27.627 SYMLINK libspdk_env_dpdk.so 00:04:27.627 CC lib/notify/notify.o 00:04:27.627 CC lib/notify/notify_rpc.o 00:04:27.627 CC lib/trace/trace.o 00:04:27.627 CC lib/trace/trace_flags.o 00:04:27.627 CC lib/trace/trace_rpc.o 00:04:27.627 CC lib/keyring/keyring.o 00:04:27.627 CC lib/keyring/keyring_rpc.o 00:04:27.885 LIB libspdk_notify.a 00:04:27.885 SO libspdk_notify.so.6.0 00:04:27.885 LIB libspdk_trace.a 00:04:27.885 SYMLINK libspdk_notify.so 00:04:27.885 LIB libspdk_keyring.a 00:04:27.885 SO libspdk_trace.so.11.0 00:04:27.885 SO libspdk_keyring.so.2.0 00:04:27.885 SYMLINK libspdk_trace.so 00:04:28.144 SYMLINK libspdk_keyring.so 00:04:28.144 CC lib/sock/sock.o 00:04:28.144 CC lib/sock/sock_rpc.o 00:04:28.144 CC lib/thread/thread.o 00:04:28.144 CC lib/thread/iobuf.o 00:04:28.711 LIB libspdk_sock.a 00:04:28.711 SO libspdk_sock.so.10.0 00:04:28.711 SYMLINK libspdk_sock.so 00:04:28.977 CC lib/nvme/nvme_ctrlr.o 00:04:28.977 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:28.977 CC lib/nvme/nvme_fabric.o 00:04:28.977 CC lib/nvme/nvme_pcie.o 00:04:28.977 CC lib/nvme/nvme_ns_cmd.o 00:04:28.977 CC lib/nvme/nvme.o 00:04:28.977 CC lib/nvme/nvme_qpair.o 00:04:28.977 CC lib/nvme/nvme_ns.o 00:04:28.977 CC lib/nvme/nvme_pcie_common.o 00:04:29.544 CC lib/nvme/nvme_quirks.o 00:04:29.544 CC lib/nvme/nvme_transport.o 00:04:29.544 CC lib/nvme/nvme_discovery.o 00:04:29.544 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:29.804 LIB libspdk_thread.a 00:04:29.804 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:29.804 CC lib/nvme/nvme_tcp.o 00:04:29.804 SO libspdk_thread.so.10.1 00:04:29.804 CC lib/nvme/nvme_opal.o 00:04:29.804 SYMLINK libspdk_thread.so 00:04:29.804 CC lib/nvme/nvme_io_msg.o 00:04:29.804 CC lib/nvme/nvme_poll_group.o 00:04:30.065 CC lib/nvme/nvme_zns.o 00:04:30.065 CC lib/nvme/nvme_stubs.o 00:04:30.065 CC lib/nvme/nvme_auth.o 00:04:30.324 CC lib/nvme/nvme_cuse.o 00:04:30.324 CC lib/nvme/nvme_rdma.o 00:04:30.585 CC lib/accel/accel.o 00:04:30.585 
CC lib/blob/blobstore.o 00:04:30.585 CC lib/init/json_config.o 00:04:30.585 CC lib/virtio/virtio.o 00:04:30.585 CC lib/fsdev/fsdev.o 00:04:30.846 CC lib/init/subsystem.o 00:04:30.846 CC lib/virtio/virtio_vhost_user.o 00:04:31.107 CC lib/init/subsystem_rpc.o 00:04:31.107 CC lib/fsdev/fsdev_io.o 00:04:31.107 CC lib/init/rpc.o 00:04:31.107 CC lib/blob/request.o 00:04:31.107 CC lib/blob/zeroes.o 00:04:31.107 LIB libspdk_init.a 00:04:31.107 SO libspdk_init.so.6.0 00:04:31.107 CC lib/blob/blob_bs_dev.o 00:04:31.107 SYMLINK libspdk_init.so 00:04:31.107 CC lib/fsdev/fsdev_rpc.o 00:04:31.367 CC lib/virtio/virtio_vfio_user.o 00:04:31.367 CC lib/accel/accel_rpc.o 00:04:31.367 CC lib/accel/accel_sw.o 00:04:31.367 LIB libspdk_fsdev.a 00:04:31.367 LIB libspdk_nvme.a 00:04:31.367 CC lib/event/app.o 00:04:31.367 CC lib/event/reactor.o 00:04:31.367 CC lib/event/log_rpc.o 00:04:31.367 SO libspdk_fsdev.so.1.0 00:04:31.367 SYMLINK libspdk_fsdev.so 00:04:31.368 CC lib/event/app_rpc.o 00:04:31.368 SO libspdk_nvme.so.14.0 00:04:31.627 CC lib/virtio/virtio_pci.o 00:04:31.627 CC lib/event/scheduler_static.o 00:04:31.627 LIB libspdk_accel.a 00:04:31.627 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:31.627 SO libspdk_accel.so.16.0 00:04:31.627 SYMLINK libspdk_accel.so 00:04:31.627 SYMLINK libspdk_nvme.so 00:04:31.627 LIB libspdk_virtio.a 00:04:31.888 SO libspdk_virtio.so.7.0 00:04:31.888 SYMLINK libspdk_virtio.so 00:04:31.888 LIB libspdk_event.a 00:04:31.888 CC lib/bdev/bdev_zone.o 00:04:31.888 CC lib/bdev/bdev.o 00:04:31.888 CC lib/bdev/part.o 00:04:31.888 CC lib/bdev/bdev_rpc.o 00:04:31.888 CC lib/bdev/scsi_nvme.o 00:04:31.888 SO libspdk_event.so.14.0 00:04:31.888 SYMLINK libspdk_event.so 00:04:32.148 LIB libspdk_fuse_dispatcher.a 00:04:32.406 SO libspdk_fuse_dispatcher.so.1.0 00:04:32.406 SYMLINK libspdk_fuse_dispatcher.so 00:04:33.340 LIB libspdk_blob.a 00:04:33.341 SO libspdk_blob.so.11.0 00:04:33.599 SYMLINK libspdk_blob.so 00:04:33.599 CC lib/blobfs/blobfs.o 00:04:33.599 CC lib/blobfs/tree.o 00:04:33.599 CC lib/lvol/lvol.o 00:04:34.166 LIB libspdk_bdev.a 00:04:34.166 SO libspdk_bdev.so.16.0 00:04:34.166 SYMLINK libspdk_bdev.so 00:04:34.424 CC lib/nbd/nbd.o 00:04:34.424 CC lib/ublk/ublk.o 00:04:34.424 CC lib/ublk/ublk_rpc.o 00:04:34.424 CC lib/nbd/nbd_rpc.o 00:04:34.424 CC lib/ftl/ftl_core.o 00:04:34.424 CC lib/ftl/ftl_init.o 00:04:34.424 CC lib/scsi/dev.o 00:04:34.424 CC lib/nvmf/ctrlr.o 00:04:34.424 CC lib/nvmf/ctrlr_discovery.o 00:04:34.424 CC lib/nvmf/ctrlr_bdev.o 00:04:34.424 LIB libspdk_blobfs.a 00:04:34.726 CC lib/scsi/lun.o 00:04:34.726 SO libspdk_blobfs.so.10.0 00:04:34.726 SYMLINK libspdk_blobfs.so 00:04:34.726 CC lib/scsi/port.o 00:04:34.726 LIB libspdk_lvol.a 00:04:34.726 CC lib/ftl/ftl_layout.o 00:04:34.726 SO libspdk_lvol.so.10.0 00:04:34.726 LIB libspdk_nbd.a 00:04:34.726 SO libspdk_nbd.so.7.0 00:04:34.726 SYMLINK libspdk_lvol.so 00:04:34.726 CC lib/ftl/ftl_debug.o 00:04:34.726 CC lib/nvmf/subsystem.o 00:04:34.726 CC lib/scsi/scsi.o 00:04:34.726 CC lib/scsi/scsi_bdev.o 00:04:34.726 SYMLINK libspdk_nbd.so 00:04:34.726 CC lib/scsi/scsi_pr.o 00:04:34.994 CC lib/nvmf/nvmf.o 00:04:34.995 CC lib/ftl/ftl_io.o 00:04:34.995 CC lib/scsi/scsi_rpc.o 00:04:34.995 CC lib/scsi/task.o 00:04:34.995 LIB libspdk_ublk.a 00:04:34.995 CC lib/nvmf/nvmf_rpc.o 00:04:34.995 SO libspdk_ublk.so.3.0 00:04:34.995 CC lib/nvmf/transport.o 00:04:34.995 SYMLINK libspdk_ublk.so 00:04:34.995 CC lib/nvmf/tcp.o 00:04:35.254 CC lib/ftl/ftl_sb.o 00:04:35.254 CC lib/ftl/ftl_l2p.o 00:04:35.254 CC lib/nvmf/stubs.o 00:04:35.254 LIB 
libspdk_scsi.a 00:04:35.254 SO libspdk_scsi.so.9.0 00:04:35.254 CC lib/ftl/ftl_l2p_flat.o 00:04:35.511 CC lib/ftl/ftl_nv_cache.o 00:04:35.511 SYMLINK libspdk_scsi.so 00:04:35.512 CC lib/nvmf/mdns_server.o 00:04:35.512 CC lib/ftl/ftl_band.o 00:04:35.512 CC lib/ftl/ftl_band_ops.o 00:04:35.770 CC lib/ftl/ftl_writer.o 00:04:35.770 CC lib/ftl/ftl_rq.o 00:04:35.770 CC lib/ftl/ftl_reloc.o 00:04:35.770 CC lib/nvmf/rdma.o 00:04:35.770 CC lib/ftl/ftl_l2p_cache.o 00:04:35.770 CC lib/nvmf/auth.o 00:04:36.029 CC lib/ftl/ftl_p2l.o 00:04:36.029 CC lib/ftl/ftl_p2l_log.o 00:04:36.029 CC lib/ftl/mngt/ftl_mngt.o 00:04:36.029 CC lib/iscsi/conn.o 00:04:36.029 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:36.288 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:36.288 CC lib/iscsi/init_grp.o 00:04:36.288 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:36.288 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:36.288 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:36.288 CC lib/vhost/vhost.o 00:04:36.546 CC lib/vhost/vhost_rpc.o 00:04:36.546 CC lib/vhost/vhost_scsi.o 00:04:36.546 CC lib/vhost/vhost_blk.o 00:04:36.546 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:36.546 CC lib/vhost/rte_vhost_user.o 00:04:36.546 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:36.804 CC lib/iscsi/iscsi.o 00:04:36.804 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:36.804 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:36.804 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:36.804 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:36.804 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:37.063 CC lib/iscsi/param.o 00:04:37.063 CC lib/iscsi/portal_grp.o 00:04:37.063 CC lib/ftl/utils/ftl_conf.o 00:04:37.063 CC lib/ftl/utils/ftl_md.o 00:04:37.063 CC lib/ftl/utils/ftl_mempool.o 00:04:37.063 CC lib/iscsi/tgt_node.o 00:04:37.322 CC lib/iscsi/iscsi_subsystem.o 00:04:37.322 CC lib/ftl/utils/ftl_bitmap.o 00:04:37.322 CC lib/ftl/utils/ftl_property.o 00:04:37.322 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:37.322 CC lib/iscsi/iscsi_rpc.o 00:04:37.322 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:37.322 LIB libspdk_vhost.a 00:04:37.322 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:37.322 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:37.322 SO libspdk_vhost.so.8.0 00:04:37.580 CC lib/iscsi/task.o 00:04:37.580 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:37.580 SYMLINK libspdk_vhost.so 00:04:37.580 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:37.580 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:37.580 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:37.580 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:37.580 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:37.580 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:37.580 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:37.580 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:37.580 CC lib/ftl/base/ftl_base_dev.o 00:04:37.580 CC lib/ftl/base/ftl_base_bdev.o 00:04:37.580 CC lib/ftl/ftl_trace.o 00:04:37.838 LIB libspdk_nvmf.a 00:04:37.838 LIB libspdk_ftl.a 00:04:38.096 SO libspdk_nvmf.so.19.0 00:04:38.096 SO libspdk_ftl.so.9.0 00:04:38.096 LIB libspdk_iscsi.a 00:04:38.096 SYMLINK libspdk_nvmf.so 00:04:38.096 SO libspdk_iscsi.so.8.0 00:04:38.355 SYMLINK libspdk_ftl.so 00:04:38.355 SYMLINK libspdk_iscsi.so 00:04:38.613 CC module/env_dpdk/env_dpdk_rpc.o 00:04:38.613 CC module/accel/ioat/accel_ioat.o 00:04:38.613 CC module/sock/posix/posix.o 00:04:38.613 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:38.613 CC module/accel/iaa/accel_iaa.o 00:04:38.613 CC module/fsdev/aio/fsdev_aio.o 00:04:38.613 CC module/accel/error/accel_error.o 00:04:38.613 CC module/accel/dsa/accel_dsa.o 00:04:38.613 CC module/keyring/file/keyring.o 00:04:38.613 CC module/blob/bdev/blob_bdev.o 
00:04:38.613 LIB libspdk_env_dpdk_rpc.a
00:04:38.872 SO libspdk_env_dpdk_rpc.so.6.0
00:04:38.872 CC module/keyring/file/keyring_rpc.o
00:04:38.872 CC module/accel/ioat/accel_ioat_rpc.o
00:04:38.872 CC module/accel/error/accel_error_rpc.o
00:04:38.872 SYMLINK libspdk_env_dpdk_rpc.so
00:04:38.872 CC module/accel/dsa/accel_dsa_rpc.o
00:04:38.872 CC module/accel/iaa/accel_iaa_rpc.o
00:04:38.872 LIB libspdk_scheduler_dynamic.a
00:04:38.872 SO libspdk_scheduler_dynamic.so.4.0
00:04:38.872 LIB libspdk_keyring_file.a
00:04:38.872 LIB libspdk_accel_ioat.a
00:04:38.872 LIB libspdk_accel_error.a
00:04:38.872 SO libspdk_keyring_file.so.2.0
00:04:38.872 SO libspdk_accel_ioat.so.6.0
00:04:38.872 SO libspdk_accel_error.so.2.0
00:04:38.872 SYMLINK libspdk_scheduler_dynamic.so
00:04:38.872 LIB libspdk_accel_iaa.a
00:04:38.872 SYMLINK libspdk_keyring_file.so
00:04:38.872 LIB libspdk_accel_dsa.a
00:04:38.872 SYMLINK libspdk_accel_error.so
00:04:38.872 SYMLINK libspdk_accel_ioat.so
00:04:38.872 CC module/fsdev/aio/fsdev_aio_rpc.o
00:04:38.872 CC module/fsdev/aio/linux_aio_mgr.o
00:04:38.872 SO libspdk_accel_iaa.so.3.0
00:04:38.872 LIB libspdk_blob_bdev.a
00:04:38.872 SO libspdk_accel_dsa.so.5.0
00:04:39.131 SO libspdk_blob_bdev.so.11.0
00:04:39.131 SYMLINK libspdk_accel_dsa.so
00:04:39.131 SYMLINK libspdk_accel_iaa.so
00:04:39.131 SYMLINK libspdk_blob_bdev.so
00:04:39.131 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:04:39.131 CC module/scheduler/gscheduler/gscheduler.o
00:04:39.131 CC module/keyring/linux/keyring.o
00:04:39.131 CC module/keyring/linux/keyring_rpc.o
00:04:39.131 LIB libspdk_scheduler_dpdk_governor.a
00:04:39.131 LIB libspdk_scheduler_gscheduler.a
00:04:39.131 SO libspdk_scheduler_gscheduler.so.4.0
00:04:39.131 SO libspdk_scheduler_dpdk_governor.so.4.0
00:04:39.131 LIB libspdk_keyring_linux.a
00:04:39.131 SO libspdk_keyring_linux.so.1.0
00:04:39.131 SYMLINK libspdk_scheduler_gscheduler.so
00:04:39.131 SYMLINK libspdk_scheduler_dpdk_governor.so
00:04:39.390 SYMLINK libspdk_keyring_linux.so
00:04:39.390 LIB libspdk_sock_posix.a
00:04:39.390 CC module/blobfs/bdev/blobfs_bdev.o
00:04:39.390 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:04:39.390 CC module/bdev/delay/vbdev_delay.o
00:04:39.390 CC module/bdev/gpt/gpt.o
00:04:39.390 CC module/bdev/error/vbdev_error.o
00:04:39.390 SO libspdk_sock_posix.so.6.0
00:04:39.390 LIB libspdk_fsdev_aio.a
00:04:39.390 CC module/bdev/lvol/vbdev_lvol.o
00:04:39.390 SO libspdk_fsdev_aio.so.1.0
00:04:39.390 SYMLINK libspdk_sock_posix.so
00:04:39.390 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:04:39.390 CC module/bdev/null/bdev_null.o
00:04:39.390 CC module/bdev/malloc/bdev_malloc.o
00:04:39.390 SYMLINK libspdk_fsdev_aio.so
00:04:39.390 CC module/bdev/error/vbdev_error_rpc.o
00:04:39.390 CC module/bdev/delay/vbdev_delay_rpc.o
00:04:39.390 LIB libspdk_blobfs_bdev.a
00:04:39.390 CC module/bdev/gpt/vbdev_gpt.o
00:04:39.390 SO libspdk_blobfs_bdev.so.6.0
00:04:39.649 SYMLINK libspdk_blobfs_bdev.so
00:04:39.649 CC module/bdev/malloc/bdev_malloc_rpc.o
00:04:39.649 CC module/bdev/null/bdev_null_rpc.o
00:04:39.649 LIB libspdk_bdev_error.a
00:04:39.649 SO libspdk_bdev_error.so.6.0
00:04:39.649 LIB libspdk_bdev_delay.a
00:04:39.649 SO libspdk_bdev_delay.so.6.0
00:04:39.649 SYMLINK libspdk_bdev_error.so
00:04:39.649 SYMLINK libspdk_bdev_delay.so
00:04:39.649 LIB libspdk_bdev_null.a
00:04:39.649 CC module/bdev/nvme/bdev_nvme.o
00:04:39.649 CC module/bdev/passthru/vbdev_passthru.o
00:04:39.649 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:04:39.649 SO libspdk_bdev_null.so.6.0
00:04:39.649 LIB libspdk_bdev_gpt.a
00:04:39.649 SO libspdk_bdev_gpt.so.6.0
00:04:39.649 LIB libspdk_bdev_malloc.a
00:04:39.649 LIB libspdk_bdev_lvol.a
00:04:39.908 SYMLINK libspdk_bdev_null.so
00:04:39.908 SO libspdk_bdev_malloc.so.6.0
00:04:39.908 CC module/bdev/nvme/bdev_nvme_rpc.o
00:04:39.908 SO libspdk_bdev_lvol.so.6.0
00:04:39.908 CC module/bdev/split/vbdev_split.o
00:04:39.908 CC module/bdev/raid/bdev_raid.o
00:04:39.908 SYMLINK libspdk_bdev_gpt.so
00:04:39.908 CC module/bdev/zone_block/vbdev_zone_block.o
00:04:39.908 SYMLINK libspdk_bdev_malloc.so
00:04:39.908 SYMLINK libspdk_bdev_lvol.so
00:04:39.908 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:04:39.908 CC module/bdev/raid/bdev_raid_rpc.o
00:04:39.908 CC module/bdev/xnvme/bdev_xnvme.o
00:04:39.908 LIB libspdk_bdev_passthru.a
00:04:39.908 CC module/bdev/raid/bdev_raid_sb.o
00:04:39.908 CC module/bdev/aio/bdev_aio.o
00:04:39.908 CC module/bdev/split/vbdev_split_rpc.o
00:04:39.908 SO libspdk_bdev_passthru.so.6.0
00:04:40.166 SYMLINK libspdk_bdev_passthru.so
00:04:40.166 CC module/bdev/aio/bdev_aio_rpc.o
00:04:40.166 LIB libspdk_bdev_zone_block.a
00:04:40.166 LIB libspdk_bdev_split.a
00:04:40.166 SO libspdk_bdev_zone_block.so.6.0
00:04:40.166 SO libspdk_bdev_split.so.6.0
00:04:40.166 CC module/bdev/ftl/bdev_ftl.o
00:04:40.166 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:04:40.166 CC module/bdev/ftl/bdev_ftl_rpc.o
00:04:40.166 SYMLINK libspdk_bdev_zone_block.so
00:04:40.166 SYMLINK libspdk_bdev_split.so
00:04:40.166 CC module/bdev/nvme/nvme_rpc.o
00:04:40.166 LIB libspdk_bdev_aio.a
00:04:40.425 SO libspdk_bdev_aio.so.6.0
00:04:40.425 CC module/bdev/nvme/bdev_mdns_client.o
00:04:40.425 LIB libspdk_bdev_xnvme.a
00:04:40.425 SYMLINK libspdk_bdev_aio.so
00:04:40.425 CC module/bdev/raid/raid0.o
00:04:40.425 CC module/bdev/iscsi/bdev_iscsi.o
00:04:40.425 CC module/bdev/nvme/vbdev_opal.o
00:04:40.425 CC module/bdev/virtio/bdev_virtio_scsi.o
00:04:40.425 LIB libspdk_bdev_ftl.a
00:04:40.425 CC module/bdev/virtio/bdev_virtio_blk.o
00:04:40.425 SO libspdk_bdev_ftl.so.6.0
00:04:40.425 SO libspdk_bdev_xnvme.so.3.0
00:04:40.425 CC module/bdev/virtio/bdev_virtio_rpc.o
00:04:40.425 SYMLINK libspdk_bdev_ftl.so
00:04:40.425 SYMLINK libspdk_bdev_xnvme.so
00:04:40.683 CC module/bdev/raid/raid1.o
00:04:40.683 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:04:40.683 CC module/bdev/nvme/vbdev_opal_rpc.o
00:04:40.683 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:04:40.683 CC module/bdev/raid/concat.o
00:04:40.683 LIB libspdk_bdev_iscsi.a
00:04:40.683 SO libspdk_bdev_iscsi.so.6.0
00:04:40.683 SYMLINK libspdk_bdev_iscsi.so
00:04:40.941 LIB libspdk_bdev_virtio.a
00:04:40.941 LIB libspdk_bdev_raid.a
00:04:40.941 SO libspdk_bdev_virtio.so.6.0
00:04:40.941 SO libspdk_bdev_raid.so.6.0
00:04:40.941 SYMLINK libspdk_bdev_virtio.so
00:04:40.941 SYMLINK libspdk_bdev_raid.so
00:04:42.318 LIB libspdk_bdev_nvme.a
00:04:42.318 SO libspdk_bdev_nvme.so.7.0
00:04:42.318 SYMLINK libspdk_bdev_nvme.so
00:04:42.576 CC module/event/subsystems/vmd/vmd.o
00:04:42.576 CC module/event/subsystems/keyring/keyring.o
00:04:42.576 CC module/event/subsystems/scheduler/scheduler.o
00:04:42.576 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:04:42.576 CC module/event/subsystems/vmd/vmd_rpc.o
00:04:42.576 CC module/event/subsystems/iobuf/iobuf.o
00:04:42.576 CC module/event/subsystems/fsdev/fsdev.o
00:04:42.576 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:04:42.576 CC module/event/subsystems/sock/sock.o
00:04:42.836 LIB libspdk_event_fsdev.a
00:04:42.836 LIB libspdk_event_keyring.a
00:04:42.836 LIB libspdk_event_vhost_blk.a
00:04:42.836 LIB libspdk_event_scheduler.a
00:04:42.836 LIB libspdk_event_vmd.a
00:04:42.836 SO libspdk_event_fsdev.so.1.0
00:04:42.836 SO libspdk_event_keyring.so.1.0
00:04:42.836 SO libspdk_event_vhost_blk.so.3.0
00:04:42.836 SO libspdk_event_scheduler.so.4.0
00:04:42.836 LIB libspdk_event_sock.a
00:04:42.836 SO libspdk_event_vmd.so.6.0
00:04:42.836 LIB libspdk_event_iobuf.a
00:04:42.836 SO libspdk_event_sock.so.5.0
00:04:42.836 SYMLINK libspdk_event_vhost_blk.so
00:04:42.836 SO libspdk_event_iobuf.so.3.0
00:04:42.836 SYMLINK libspdk_event_scheduler.so
00:04:42.836 SYMLINK libspdk_event_keyring.so
00:04:42.836 SYMLINK libspdk_event_fsdev.so
00:04:42.836 SYMLINK libspdk_event_vmd.so
00:04:42.836 SYMLINK libspdk_event_sock.so
00:04:42.836 SYMLINK libspdk_event_iobuf.so
00:04:43.095 CC module/event/subsystems/accel/accel.o
00:04:43.095 LIB libspdk_event_accel.a
00:04:43.095 SO libspdk_event_accel.so.6.0
00:04:43.353 SYMLINK libspdk_event_accel.so
00:04:43.611 CC module/event/subsystems/bdev/bdev.o
00:04:43.611 LIB libspdk_event_bdev.a
00:04:43.611 SO libspdk_event_bdev.so.6.0
00:04:43.611 SYMLINK libspdk_event_bdev.so
00:04:43.904 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:04:43.904 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:04:43.904 CC module/event/subsystems/nbd/nbd.o
00:04:43.904 CC module/event/subsystems/ublk/ublk.o
00:04:43.904 CC module/event/subsystems/scsi/scsi.o
00:04:43.904 LIB libspdk_event_nbd.a
00:04:44.164 LIB libspdk_event_scsi.a
00:04:44.164 SO libspdk_event_nbd.so.6.0
00:04:44.164 LIB libspdk_event_ublk.a
00:04:44.164 SO libspdk_event_scsi.so.6.0
00:04:44.164 SO libspdk_event_ublk.so.3.0
00:04:44.164 SYMLINK libspdk_event_nbd.so
00:04:44.164 LIB libspdk_event_nvmf.a
00:04:44.164 SYMLINK libspdk_event_scsi.so
00:04:44.164 SYMLINK libspdk_event_ublk.so
00:04:44.164 SO libspdk_event_nvmf.so.6.0
00:04:44.164 SYMLINK libspdk_event_nvmf.so
00:04:44.164 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:04:44.164 CC module/event/subsystems/iscsi/iscsi.o
00:04:44.422 LIB libspdk_event_vhost_scsi.a
00:04:44.422 LIB libspdk_event_iscsi.a
00:04:44.422 SO libspdk_event_vhost_scsi.so.3.0
00:04:44.422 SO libspdk_event_iscsi.so.6.0
00:04:44.422 SYMLINK libspdk_event_vhost_scsi.so
00:04:44.423 SYMLINK libspdk_event_iscsi.so
00:04:44.681 SO libspdk.so.6.0
00:04:44.681 SYMLINK libspdk.so
00:04:44.681 TEST_HEADER include/spdk/accel.h
00:04:44.681 TEST_HEADER include/spdk/accel_module.h
00:04:44.681 CXX app/trace/trace.o
00:04:44.681 TEST_HEADER include/spdk/assert.h
00:04:44.681 TEST_HEADER include/spdk/barrier.h
00:04:44.681 TEST_HEADER include/spdk/base64.h
00:04:44.681 TEST_HEADER include/spdk/bdev.h
00:04:44.940 TEST_HEADER include/spdk/bdev_module.h
00:04:44.940 CC test/rpc_client/rpc_client_test.o
00:04:44.940 TEST_HEADER include/spdk/bdev_zone.h
00:04:44.940 TEST_HEADER include/spdk/bit_array.h
00:04:44.940 TEST_HEADER include/spdk/bit_pool.h
00:04:44.940 TEST_HEADER include/spdk/blob_bdev.h
00:04:44.940 TEST_HEADER include/spdk/blobfs_bdev.h
00:04:44.940 TEST_HEADER include/spdk/blobfs.h
00:04:44.940 TEST_HEADER include/spdk/blob.h
00:04:44.940 TEST_HEADER include/spdk/conf.h
00:04:44.940 TEST_HEADER include/spdk/config.h
00:04:44.940 TEST_HEADER include/spdk/cpuset.h
00:04:44.940 TEST_HEADER include/spdk/crc16.h
00:04:44.940 CC examples/interrupt_tgt/interrupt_tgt.o
00:04:44.940 TEST_HEADER include/spdk/crc32.h
00:04:44.940 TEST_HEADER include/spdk/crc64.h
00:04:44.940 TEST_HEADER include/spdk/dif.h
00:04:44.940 TEST_HEADER include/spdk/dma.h
00:04:44.940 TEST_HEADER include/spdk/endian.h
00:04:44.940 TEST_HEADER include/spdk/env_dpdk.h
00:04:44.940 TEST_HEADER include/spdk/env.h
00:04:44.940 TEST_HEADER include/spdk/event.h
00:04:44.940 TEST_HEADER include/spdk/fd_group.h
00:04:44.940 TEST_HEADER include/spdk/fd.h
00:04:44.940 TEST_HEADER include/spdk/file.h
00:04:44.940 TEST_HEADER include/spdk/fsdev.h
00:04:44.940 TEST_HEADER include/spdk/fsdev_module.h
00:04:44.940 TEST_HEADER include/spdk/ftl.h
00:04:44.940 TEST_HEADER include/spdk/fuse_dispatcher.h
00:04:44.940 TEST_HEADER include/spdk/gpt_spec.h
00:04:44.940 TEST_HEADER include/spdk/hexlify.h
00:04:44.940 TEST_HEADER include/spdk/histogram_data.h
00:04:44.940 TEST_HEADER include/spdk/idxd.h
00:04:44.940 TEST_HEADER include/spdk/idxd_spec.h
00:04:44.940 TEST_HEADER include/spdk/init.h
00:04:44.940 TEST_HEADER include/spdk/ioat.h
00:04:44.940 TEST_HEADER include/spdk/ioat_spec.h
00:04:44.940 CC examples/util/zipf/zipf.o
00:04:44.940 TEST_HEADER include/spdk/iscsi_spec.h
00:04:44.940 TEST_HEADER include/spdk/json.h
00:04:44.940 TEST_HEADER include/spdk/jsonrpc.h
00:04:44.940 TEST_HEADER include/spdk/keyring.h
00:04:44.940 TEST_HEADER include/spdk/keyring_module.h
00:04:44.940 CC examples/ioat/perf/perf.o
00:04:44.940 TEST_HEADER include/spdk/likely.h
00:04:44.940 CC test/thread/poller_perf/poller_perf.o
00:04:44.940 TEST_HEADER include/spdk/log.h
00:04:44.940 TEST_HEADER include/spdk/lvol.h
00:04:44.940 TEST_HEADER include/spdk/md5.h
00:04:44.940 TEST_HEADER include/spdk/memory.h
00:04:44.940 TEST_HEADER include/spdk/mmio.h
00:04:44.940 TEST_HEADER include/spdk/nbd.h
00:04:44.940 TEST_HEADER include/spdk/net.h
00:04:44.940 TEST_HEADER include/spdk/notify.h
00:04:44.940 TEST_HEADER include/spdk/nvme.h
00:04:44.940 TEST_HEADER include/spdk/nvme_intel.h
00:04:44.940 TEST_HEADER include/spdk/nvme_ocssd.h
00:04:44.940 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:04:44.940 CC test/app/bdev_svc/bdev_svc.o
00:04:44.940 TEST_HEADER include/spdk/nvme_spec.h
00:04:44.940 TEST_HEADER include/spdk/nvme_zns.h
00:04:44.940 TEST_HEADER include/spdk/nvmf_cmd.h
00:04:44.940 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:04:44.940 TEST_HEADER include/spdk/nvmf.h
00:04:44.940 TEST_HEADER include/spdk/nvmf_spec.h
00:04:44.940 TEST_HEADER include/spdk/nvmf_transport.h
00:04:44.940 CC test/dma/test_dma/test_dma.o
00:04:44.940 TEST_HEADER include/spdk/opal.h
00:04:44.940 TEST_HEADER include/spdk/opal_spec.h
00:04:44.940 TEST_HEADER include/spdk/pci_ids.h
00:04:44.940 TEST_HEADER include/spdk/pipe.h
00:04:44.940 TEST_HEADER include/spdk/queue.h
00:04:44.940 TEST_HEADER include/spdk/reduce.h
00:04:44.940 TEST_HEADER include/spdk/rpc.h
00:04:44.940 TEST_HEADER include/spdk/scheduler.h
00:04:44.940 CC test/env/mem_callbacks/mem_callbacks.o
00:04:44.940 TEST_HEADER include/spdk/scsi.h
00:04:44.940 TEST_HEADER include/spdk/scsi_spec.h
00:04:44.940 TEST_HEADER include/spdk/sock.h
00:04:44.940 TEST_HEADER include/spdk/stdinc.h
00:04:44.940 TEST_HEADER include/spdk/string.h
00:04:44.940 TEST_HEADER include/spdk/thread.h
00:04:44.940 TEST_HEADER include/spdk/trace.h
00:04:44.940 TEST_HEADER include/spdk/trace_parser.h
00:04:44.940 TEST_HEADER include/spdk/tree.h
00:04:44.940 TEST_HEADER include/spdk/ublk.h
00:04:44.940 TEST_HEADER include/spdk/util.h
00:04:44.940 TEST_HEADER include/spdk/uuid.h
00:04:44.940 TEST_HEADER include/spdk/version.h
00:04:44.940 TEST_HEADER include/spdk/vfio_user_pci.h
00:04:44.940 TEST_HEADER include/spdk/vfio_user_spec.h
00:04:44.940 TEST_HEADER include/spdk/vhost.h
00:04:44.940 TEST_HEADER include/spdk/vmd.h
00:04:44.940 TEST_HEADER include/spdk/xor.h
00:04:44.940 TEST_HEADER include/spdk/zipf.h
00:04:44.940 CXX test/cpp_headers/accel.o
00:04:44.940 LINK rpc_client_test
00:04:44.940 LINK zipf
00:04:44.940 LINK interrupt_tgt
00:04:44.940 LINK poller_perf
00:04:45.198 LINK ioat_perf
00:04:45.198 LINK mem_callbacks
00:04:45.198 LINK bdev_svc
00:04:45.198 CXX test/cpp_headers/accel_module.o
00:04:45.198 LINK spdk_trace
00:04:45.198 CC test/env/vtophys/vtophys.o
00:04:45.198 CC examples/ioat/verify/verify.o
00:04:45.198 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:04:45.198 CXX test/cpp_headers/assert.o
00:04:45.198 CC test/env/memory/memory_ut.o
00:04:45.198 CC test/event/event_perf/event_perf.o
00:04:45.198 LINK vtophys
00:04:45.456 LINK env_dpdk_post_init
00:04:45.456 CC app/trace_record/trace_record.o
00:04:45.456 CC examples/thread/thread/thread_ex.o
00:04:45.456 LINK test_dma
00:04:45.456 CXX test/cpp_headers/barrier.o
00:04:45.456 LINK event_perf
00:04:45.456 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:04:45.456 LINK verify
00:04:45.456 CC test/app/histogram_perf/histogram_perf.o
00:04:45.456 CXX test/cpp_headers/base64.o
00:04:45.714 CC test/event/reactor/reactor.o
00:04:45.714 CC app/nvmf_tgt/nvmf_main.o
00:04:45.714 LINK spdk_trace_record
00:04:45.714 LINK histogram_perf
00:04:45.714 LINK thread
00:04:45.714 CC app/iscsi_tgt/iscsi_tgt.o
00:04:45.714 CXX test/cpp_headers/bdev.o
00:04:45.714 LINK reactor
00:04:45.714 CC examples/sock/hello_world/hello_sock.o
00:04:45.714 LINK nvmf_tgt
00:04:45.714 CC test/env/pci/pci_ut.o
00:04:45.714 LINK nvme_fuzz
00:04:45.714 CXX test/cpp_headers/bdev_module.o
00:04:45.714 CC test/event/reactor_perf/reactor_perf.o
00:04:45.972 LINK iscsi_tgt
00:04:45.972 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:04:45.972 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:04:45.972 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:04:45.973 LINK reactor_perf
00:04:45.973 LINK hello_sock
00:04:45.973 CXX test/cpp_headers/bdev_zone.o
00:04:45.973 LINK memory_ut
00:04:45.973 CC app/spdk_tgt/spdk_tgt.o
00:04:45.973 CC app/spdk_lspci/spdk_lspci.o
00:04:46.231 CC app/spdk_nvme_perf/perf.o
00:04:46.231 CXX test/cpp_headers/bit_array.o
00:04:46.231 CC test/event/app_repeat/app_repeat.o
00:04:46.231 LINK pci_ut
00:04:46.231 LINK spdk_lspci
00:04:46.231 CC examples/vmd/lsvmd/lsvmd.o
00:04:46.231 LINK spdk_tgt
00:04:46.231 LINK vhost_fuzz
00:04:46.231 CXX test/cpp_headers/bit_pool.o
00:04:46.231 LINK app_repeat
00:04:46.231 CXX test/cpp_headers/blob_bdev.o
00:04:46.489 LINK lsvmd
00:04:46.489 CC test/accel/dif/dif.o
00:04:46.489 CXX test/cpp_headers/blobfs_bdev.o
00:04:46.489 CC examples/vmd/led/led.o
00:04:46.489 CC app/spdk_nvme_identify/identify.o
00:04:46.489 CC test/event/scheduler/scheduler.o
00:04:46.489 CC app/spdk_nvme_discover/discovery_aer.o
00:04:46.489 LINK led
00:04:46.489 CXX test/cpp_headers/blobfs.o
00:04:46.750 CC test/blobfs/mkfs/mkfs.o
00:04:46.750 CC test/lvol/esnap/esnap.o
00:04:46.750 LINK scheduler
00:04:46.750 CXX test/cpp_headers/blob.o
00:04:46.750 LINK spdk_nvme_discover
00:04:46.750 LINK mkfs
00:04:46.750 CXX test/cpp_headers/conf.o
00:04:47.011 CC examples/idxd/perf/perf.o
00:04:47.011 LINK spdk_nvme_perf
00:04:47.011 CC app/spdk_top/spdk_top.o
00:04:47.011 CXX test/cpp_headers/config.o
00:04:47.011 CC test/nvme/aer/aer.o
00:04:47.011 CXX test/cpp_headers/cpuset.o
00:04:47.011 LINK dif
00:04:47.011 CC test/nvme/reset/reset.o
00:04:47.271 CXX test/cpp_headers/crc16.o
00:04:47.271 CC app/vhost/vhost.o
00:04:47.271 LINK idxd_perf
00:04:47.271 LINK spdk_nvme_identify
00:04:47.271 LINK aer
00:04:47.271 CC test/nvme/sgl/sgl.o
00:04:47.271 CXX test/cpp_headers/crc32.o
00:04:47.271 LINK reset
00:04:47.587 LINK vhost
00:04:47.587 CXX test/cpp_headers/crc64.o
00:04:47.587 CC test/nvme/e2edp/nvme_dp.o
00:04:47.587 CC test/nvme/overhead/overhead.o
00:04:47.587 CC examples/fsdev/hello_world/hello_fsdev.o
00:04:47.587 LINK iscsi_fuzz
00:04:47.587 CC test/nvme/err_injection/err_injection.o
00:04:47.587 LINK sgl
00:04:47.587 CC test/nvme/startup/startup.o
00:04:47.587 CXX test/cpp_headers/dif.o
00:04:47.847 LINK err_injection
00:04:47.847 CXX test/cpp_headers/dma.o
00:04:47.847 LINK startup
00:04:47.847 LINK nvme_dp
00:04:47.847 CC test/app/jsoncat/jsoncat.o
00:04:47.847 LINK overhead
00:04:47.847 LINK hello_fsdev
00:04:47.847 CC test/nvme/reserve/reserve.o
00:04:47.847 LINK spdk_top
00:04:47.847 CXX test/cpp_headers/endian.o
00:04:47.847 CC app/spdk_dd/spdk_dd.o
00:04:47.847 CXX test/cpp_headers/env_dpdk.o
00:04:47.847 LINK jsoncat
00:04:48.105 CC test/nvme/simple_copy/simple_copy.o
00:04:48.105 CXX test/cpp_headers/env.o
00:04:48.105 LINK reserve
00:04:48.105 CC app/fio/nvme/fio_plugin.o
00:04:48.105 CC test/app/stub/stub.o
00:04:48.105 CC examples/accel/perf/accel_perf.o
00:04:48.105 CC test/bdev/bdevio/bdevio.o
00:04:48.105 CC examples/blob/hello_world/hello_blob.o
00:04:48.105 CXX test/cpp_headers/event.o
00:04:48.365 LINK simple_copy
00:04:48.365 LINK stub
00:04:48.365 CC test/nvme/connect_stress/connect_stress.o
00:04:48.365 LINK spdk_dd
00:04:48.365 CXX test/cpp_headers/fd_group.o
00:04:48.365 CXX test/cpp_headers/fd.o
00:04:48.365 LINK hello_blob
00:04:48.365 CXX test/cpp_headers/file.o
00:04:48.365 LINK connect_stress
00:04:48.626 LINK accel_perf
00:04:48.626 CC examples/blob/cli/blobcli.o
00:04:48.626 CXX test/cpp_headers/fsdev.o
00:04:48.626 CC test/nvme/boot_partition/boot_partition.o
00:04:48.626 CXX test/cpp_headers/fsdev_module.o
00:04:48.626 LINK bdevio
00:04:48.626 CC test/nvme/compliance/nvme_compliance.o
00:04:48.626 CC test/nvme/fused_ordering/fused_ordering.o
00:04:48.626 CXX test/cpp_headers/ftl.o
00:04:48.626 LINK spdk_nvme
00:04:48.626 CXX test/cpp_headers/fuse_dispatcher.o
00:04:48.626 LINK boot_partition
00:04:48.626 CXX test/cpp_headers/gpt_spec.o
00:04:48.903 CC test/nvme/doorbell_aers/doorbell_aers.o
00:04:48.903 LINK fused_ordering
00:04:48.903 CXX test/cpp_headers/hexlify.o
00:04:48.903 LINK nvme_compliance
00:04:48.903 CC test/nvme/fdp/fdp.o
00:04:48.903 CC app/fio/bdev/fio_plugin.o
00:04:48.903 CXX test/cpp_headers/histogram_data.o
00:04:48.903 CC test/nvme/cuse/cuse.o
00:04:48.903 CXX test/cpp_headers/idxd.o
00:04:48.903 LINK doorbell_aers
00:04:48.903 LINK blobcli
00:04:48.903 CXX test/cpp_headers/idxd_spec.o
00:04:49.162 CC examples/nvme/hello_world/hello_world.o
00:04:49.162 LINK fdp
00:04:49.162 CC examples/nvme/reconnect/reconnect.o
00:04:49.162 CXX test/cpp_headers/init.o
00:04:49.162 CC examples/bdev/hello_world/hello_bdev.o
00:04:49.162 CC examples/bdev/bdevperf/bdevperf.o
00:04:49.162 CXX test/cpp_headers/ioat.o
00:04:49.162 CC examples/nvme/nvme_manage/nvme_manage.o
00:04:49.420 LINK hello_world
00:04:49.420 CXX test/cpp_headers/ioat_spec.o
00:04:49.420 LINK spdk_bdev
00:04:49.420 CXX test/cpp_headers/iscsi_spec.o
00:04:49.420 LINK hello_bdev
00:04:49.420 CXX test/cpp_headers/json.o
00:04:49.420 LINK reconnect
00:04:49.420 CC examples/nvme/arbitration/arbitration.o
00:04:49.420 CXX test/cpp_headers/jsonrpc.o
00:04:49.420 CC examples/nvme/hotplug/hotplug.o
00:04:49.420 CXX test/cpp_headers/keyring.o
00:04:49.679 CXX test/cpp_headers/keyring_module.o
00:04:49.679 CXX test/cpp_headers/likely.o
00:04:49.679 LINK nvme_manage
00:04:49.679 CXX test/cpp_headers/log.o
00:04:49.679 CXX test/cpp_headers/lvol.o
00:04:49.679 CXX test/cpp_headers/md5.o
00:04:49.679 CXX test/cpp_headers/memory.o
00:04:49.679 LINK hotplug
00:04:49.679 CC examples/nvme/cmb_copy/cmb_copy.o
00:04:49.679 CC examples/nvme/abort/abort.o
00:04:49.679 CXX test/cpp_headers/mmio.o
00:04:49.679 LINK bdevperf
00:04:49.937 LINK arbitration
00:04:49.937 CXX test/cpp_headers/nbd.o
00:04:49.937 CXX test/cpp_headers/net.o
00:04:49.937 CXX test/cpp_headers/notify.o
00:04:49.937 LINK cmb_copy
00:04:49.937 CXX test/cpp_headers/nvme.o
00:04:49.937 CXX test/cpp_headers/nvme_intel.o
00:04:49.937 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:04:49.937 CXX test/cpp_headers/nvme_ocssd.o
00:04:49.937 CXX test/cpp_headers/nvme_ocssd_spec.o
00:04:49.937 CXX test/cpp_headers/nvme_spec.o
00:04:49.937 CXX test/cpp_headers/nvme_zns.o
00:04:49.937 CXX test/cpp_headers/nvmf_cmd.o
00:04:50.195 CXX test/cpp_headers/nvmf_fc_spec.o
00:04:50.195 LINK pmr_persistence
00:04:50.195 LINK cuse
00:04:50.195 CXX test/cpp_headers/nvmf.o
00:04:50.195 CXX test/cpp_headers/nvmf_spec.o
00:04:50.195 CXX test/cpp_headers/nvmf_transport.o
00:04:50.195 LINK abort
00:04:50.195 CXX test/cpp_headers/opal.o
00:04:50.195 CXX test/cpp_headers/opal_spec.o
00:04:50.195 CXX test/cpp_headers/pci_ids.o
00:04:50.195 CXX test/cpp_headers/pipe.o
00:04:50.195 CXX test/cpp_headers/queue.o
00:04:50.195 CXX test/cpp_headers/reduce.o
00:04:50.195 CXX test/cpp_headers/rpc.o
00:04:50.195 CXX test/cpp_headers/scheduler.o
00:04:50.195 CXX test/cpp_headers/scsi.o
00:04:50.195 CXX test/cpp_headers/scsi_spec.o
00:04:50.453 CXX test/cpp_headers/sock.o
00:04:50.453 CXX test/cpp_headers/stdinc.o
00:04:50.453 CXX test/cpp_headers/string.o
00:04:50.453 CC examples/nvmf/nvmf/nvmf.o
00:04:50.453 CXX test/cpp_headers/thread.o
00:04:50.453 CXX test/cpp_headers/trace.o
00:04:50.453 CXX test/cpp_headers/trace_parser.o
00:04:50.453 CXX test/cpp_headers/tree.o
00:04:50.453 CXX test/cpp_headers/ublk.o
00:04:50.453 CXX test/cpp_headers/util.o
00:04:50.453 CXX test/cpp_headers/uuid.o
00:04:50.453 CXX test/cpp_headers/version.o
00:04:50.453 CXX test/cpp_headers/vfio_user_pci.o
00:04:50.453 CXX test/cpp_headers/vfio_user_spec.o
00:04:50.453 CXX test/cpp_headers/vhost.o
00:04:50.453 CXX test/cpp_headers/vmd.o
00:04:50.453 CXX test/cpp_headers/xor.o
00:04:50.453 CXX test/cpp_headers/zipf.o
00:04:50.711 LINK nvmf
00:04:51.650 LINK esnap
00:04:51.911
00:04:51.911 real 1m1.585s
00:04:51.911 user 5m14.579s
00:04:51.911 sys 0m53.068s
00:04:51.911 23:53:42 make -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:04:51.911 23:53:42 make -- common/autotest_common.sh@10 -- $ set +x
00:04:51.911 ************************************
00:04:51.911 END TEST make
00:04:51.911 ************************************
00:04:51.911 23:53:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:04:51.911 23:53:42 -- pm/common@29 -- $ signal_monitor_resources TERM
00:04:51.911 23:53:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:04:51.911 23:53:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:51.911 23:53:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:04:51.911 23:53:42 -- pm/common@44 -- $ pid=5797
00:04:51.911 23:53:42 -- pm/common@50 -- $ kill -TERM 5797
00:04:51.911 23:53:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:51.911 23:53:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:04:51.911 23:53:42 -- pm/common@44 -- $ pid=5798
00:04:51.911 23:53:42 -- pm/common@50 -- $ kill -TERM 5798
00:04:51.911 23:53:42 -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:51.911 23:53:42 -- common/autotest_common.sh@1681 -- # lcov --version
00:04:51.911 23:53:42 -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:52.170 23:53:42 -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:52.170 23:53:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:52.170 23:53:42 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:52.170 23:53:42 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:52.170 23:53:42 -- scripts/common.sh@336 -- # IFS=.-:
00:04:52.170 23:53:42 -- scripts/common.sh@336 -- # read -ra ver1
00:04:52.170 23:53:42 -- scripts/common.sh@337 -- # IFS=.-:
00:04:52.170 23:53:42 -- scripts/common.sh@337 -- # read -ra ver2
00:04:52.170 23:53:42 -- scripts/common.sh@338 -- # local 'op=<'
00:04:52.170 23:53:42 -- scripts/common.sh@340 -- # ver1_l=2
00:04:52.170 23:53:42 -- scripts/common.sh@341 -- # ver2_l=1
00:04:52.170 23:53:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:52.170 23:53:42 -- scripts/common.sh@344 -- # case "$op" in
00:04:52.170 23:53:42 -- scripts/common.sh@345 -- # : 1
00:04:52.170 23:53:42 -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:52.170 23:53:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:52.170 23:53:42 -- scripts/common.sh@365 -- # decimal 1
00:04:52.170 23:53:42 -- scripts/common.sh@353 -- # local d=1
00:04:52.170 23:53:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:52.170 23:53:42 -- scripts/common.sh@355 -- # echo 1
00:04:52.170 23:53:42 -- scripts/common.sh@365 -- # ver1[v]=1
00:04:52.170 23:53:42 -- scripts/common.sh@366 -- # decimal 2
00:04:52.170 23:53:42 -- scripts/common.sh@353 -- # local d=2
00:04:52.170 23:53:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:52.170 23:53:42 -- scripts/common.sh@355 -- # echo 2
00:04:52.170 23:53:42 -- scripts/common.sh@366 -- # ver2[v]=2
00:04:52.170 23:53:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:52.170 23:53:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:52.170 23:53:42 -- scripts/common.sh@368 -- # return 0
00:04:52.170 23:53:42 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:52.170 23:53:42 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:52.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.170 --rc genhtml_branch_coverage=1
00:04:52.170 --rc genhtml_function_coverage=1
00:04:52.170 --rc genhtml_legend=1
00:04:52.170 --rc geninfo_all_blocks=1
00:04:52.170 --rc geninfo_unexecuted_blocks=1
00:04:52.170
00:04:52.170 '
00:04:52.170 23:53:42 -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:04:52.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.170 --rc genhtml_branch_coverage=1
00:04:52.170 --rc genhtml_function_coverage=1
00:04:52.170 --rc genhtml_legend=1
00:04:52.170 --rc geninfo_all_blocks=1
00:04:52.170 --rc geninfo_unexecuted_blocks=1
00:04:52.170
00:04:52.170 '
00:04:52.170 23:53:42 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:52.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.170 --rc genhtml_branch_coverage=1
00:04:52.170 --rc genhtml_function_coverage=1
00:04:52.170 --rc genhtml_legend=1
00:04:52.170 --rc geninfo_all_blocks=1
00:04:52.170 --rc geninfo_unexecuted_blocks=1
00:04:52.170
00:04:52.170 '
00:04:52.170 23:53:42 -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:52.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:52.170 --rc genhtml_branch_coverage=1
00:04:52.170 --rc genhtml_function_coverage=1
00:04:52.170 --rc genhtml_legend=1
00:04:52.170 --rc geninfo_all_blocks=1
00:04:52.170 --rc geninfo_unexecuted_blocks=1
00:04:52.170
00:04:52.170 '
00:04:52.170 23:53:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:52.171 23:53:42 -- nvmf/common.sh@7 -- # uname -s
00:04:52.171 23:53:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:52.171 23:53:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:52.171 23:53:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:52.171 23:53:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:52.171 23:53:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:52.171 23:53:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:52.171 23:53:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:52.171 23:53:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:52.171 23:53:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:52.171 23:53:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:52.171 23:53:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15ef5cdc-f81d-4909-b4f0-e2a4a086d794
00:04:52.171 23:53:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=15ef5cdc-f81d-4909-b4f0-e2a4a086d794
00:04:52.171 23:53:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:52.171 23:53:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:52.171 23:53:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:52.171 23:53:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:52.171 23:53:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:52.171 23:53:42 -- scripts/common.sh@15 -- # shopt -s extglob
00:04:52.171 23:53:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:52.171 23:53:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:52.171 23:53:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:52.171 23:53:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:52.171 23:53:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:52.171 23:53:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:52.171 23:53:42 -- paths/export.sh@5 -- # export PATH
00:04:52.171 23:53:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:52.171 23:53:42 -- nvmf/common.sh@51 -- # : 0
00:04:52.171 23:53:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:52.171 23:53:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:52.171 23:53:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:52.171 23:53:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:52.171 23:53:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:52.171 23:53:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:52.171 23:53:42 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:52.171 23:53:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:52.171 23:53:42 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:52.171 23:53:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:04:52.171 23:53:42 -- spdk/autotest.sh@32 -- # uname -s
00:04:52.171 23:53:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:04:52.171 23:53:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:04:52.171 23:53:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:04:52.171 23:53:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:04:52.171 23:53:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:04:52.171 23:53:42 -- spdk/autotest.sh@44 -- # modprobe nbd
00:04:52.171 23:53:42 -- spdk/autotest.sh@46 -- # type -P udevadm
00:04:52.171 23:53:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:04:52.171 23:53:42 -- spdk/autotest.sh@48 -- # udevadm_pid=66599
00:04:52.171 23:53:42 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:04:52.171 23:53:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:04:52.171 23:53:42 -- pm/common@17 -- # local monitor
00:04:52.171 23:53:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:52.171 23:53:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:52.171 23:53:42 -- pm/common@25 -- # sleep 1
00:04:52.171 23:53:42 -- pm/common@21 -- # date +%s
00:04:52.171 23:53:42 -- pm/common@21 -- # date +%s
00:04:52.171 23:53:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732146822
00:04:52.171 23:53:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732146822
00:04:52.171 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732146822_collect-cpu-load.pm.log
00:04:52.171 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732146822_collect-vmstat.pm.log
00:04:53.110 23:53:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:04:53.110 23:53:43 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:04:53.110 23:53:43 -- common/autotest_common.sh@724 -- # xtrace_disable
00:04:53.110 23:53:43 -- common/autotest_common.sh@10 -- # set +x
00:04:53.110 23:53:43 -- spdk/autotest.sh@59 -- # create_test_list
00:04:53.110 23:53:43 -- common/autotest_common.sh@748 -- # xtrace_disable
00:04:53.110 23:53:43 -- common/autotest_common.sh@10 -- # set +x
00:04:53.110 23:53:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:04:53.110 23:53:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:04:53.110 23:53:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:04:53.110 23:53:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:04:53.110 23:53:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:04:53.110 23:53:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:04:53.110 23:53:43 -- common/autotest_common.sh@1455 -- # uname
00:04:53.110 23:53:43 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:04:53.110 23:53:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:04:53.110 23:53:43 -- common/autotest_common.sh@1475 -- # uname
00:04:53.110 23:53:43 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:04:53.110 23:53:43 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:04:53.110 23:53:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:04:53.402 lcov: LCOV version 1.15
00:04:53.403 23:53:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:05:08.309 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:08.309 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:05:23.245 23:54:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:05:23.245 23:54:13 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:23.245 23:54:13 -- common/autotest_common.sh@10 -- # set +x
00:05:23.245 23:54:13 -- spdk/autotest.sh@78 -- # rm -f
00:05:23.245 23:54:13 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:23.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:24.105 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:05:24.105 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:05:24.105 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:05:24.105 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:05:24.105 23:54:14 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:05:24.105 23:54:14 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:05:24.105 23:54:14 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:05:24.105 23:54:14 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:05:24.105 23:54:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:24.105 23:54:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:05:24.105 23:54:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:05:24.105 23:54:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:24.105 23:54:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1
00:05:24.105 23:54:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n1
00:05:24.105 23:54:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:24.105 23:54:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1
00:05:24.105 23:54:14 -- common/autotest_common.sh@1648 -- # local device=nvme2n1
00:05:24.105 23:54:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:24.105 23:54:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2
00:05:24.105 23:54:14 -- common/autotest_common.sh@1648 -- # local device=nvme2n2
00:05:24.105 23:54:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:24.105 23:54:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3
00:05:24.105 23:54:14 -- common/autotest_common.sh@1648 -- # local device=nvme2n3
00:05:24.105 23:54:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:24.105 23:54:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1
00:05:24.105 23:54:14 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1
00:05:24.105 23:54:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:24.105 23:54:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1
00:05:24.105 23:54:14 -- common/autotest_common.sh@1648 -- # local device=nvme3n1
00:05:24.105 23:54:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:05:24.105 23:54:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:24.105 23:54:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:05:24.105 23:54:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:24.105 23:54:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:24.105 23:54:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:05:24.105 23:54:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:05:24.105 23:54:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:24.366 No valid GPT data, bailing
00:05:24.366 23:54:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:24.366 23:54:14 -- scripts/common.sh@394 -- # pt=
00:05:24.366 23:54:14 -- scripts/common.sh@395 -- # return 1
00:05:24.366 23:54:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:24.366 1+0 records in
00:05:24.366 1+0 records out
00:05:24.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274153 s, 38.2 MB/s
00:05:24.366 23:54:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:24.366 23:54:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:24.366 23:54:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:05:24.366 23:54:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:05:24.366 23:54:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:05:24.366 No valid GPT data, bailing
00:05:24.366 23:54:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:05:24.366 23:54:14 -- scripts/common.sh@394 -- # pt=
00:05:24.366 23:54:14 -- scripts/common.sh@395 -- # return 1
00:05:24.366 23:54:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:05:24.366 1+0 records in
00:05:24.366 1+0 records out
00:05:24.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00657359 s, 160 MB/s
00:05:24.366 23:54:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:24.366 23:54:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:24.366 23:54:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1
00:05:24.366 23:54:14 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt
00:05:24.366 23:54:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1
00:05:24.366 No valid GPT data, bailing
00:05:24.366 23:54:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:05:24.366 23:54:14 -- scripts/common.sh@394 -- # pt=
00:05:24.366 23:54:14 -- scripts/common.sh@395 -- # return 1
00:05:24.366 23:54:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1
00:05:24.366 1+0 records in
00:05:24.366 1+0 records out
00:05:24.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475885 s, 220 MB/s
00:05:24.366 23:54:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:24.366 23:54:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:24.366 23:54:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2
00:05:24.366 23:54:14 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt
00:05:24.366 23:54:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2
00:05:24.629 No valid GPT data, bailing
00:05:24.629 23:54:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2
00:05:24.629 23:54:14 -- scripts/common.sh@394 -- # pt=
00:05:24.629 23:54:14 -- scripts/common.sh@395 -- # return 1
00:05:24.629 23:54:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1
00:05:24.629 1+0 records in
00:05:24.629 1+0 records out
00:05:24.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596537 s, 176 MB/s
00:05:24.629 23:54:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:24.629 23:54:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:24.629 23:54:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3
00:05:24.629 23:54:14 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt
00:05:24.629 23:54:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3
00:05:24.629 No valid GPT data, bailing
00:05:24.629 23:54:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3
00:05:24.629 23:54:14 -- scripts/common.sh@394 -- # pt=
00:05:24.629 23:54:14 -- scripts/common.sh@395 -- # return 1
00:05:24.629 23:54:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1
00:05:24.629 1+0 records in
00:05:24.629 1+0 records out
00:05:24.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00585552 s, 179 MB/s
00:05:24.629 23:54:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:24.629 23:54:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:24.629 23:54:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1
00:05:24.629 23:54:14 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt
00:05:24.629 23:54:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1
00:05:24.629 No valid GPT data, bailing
00:05:24.629 23:54:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1
00:05:24.629 23:54:15 -- scripts/common.sh@394 -- # pt=
00:05:24.629 23:54:15 -- scripts/common.sh@395 -- # return 1
00:05:24.629 23:54:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1
00:05:24.629 1+0 records in
00:05:24.629 1+0 records out
00:05:24.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483546 s, 217 MB/s
00:05:24.629 23:54:15 -- spdk/autotest.sh@105 -- # sync
00:05:24.890 23:54:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:24.890 23:54:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:24.890 23:54:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:26.277 23:54:16 -- spdk/autotest.sh@111 -- # uname -s
00:05:26.277 23:54:16 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:05:26.277 23:54:16 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:05:26.277 23:54:16 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:26.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:27.425 Hugepages
00:05:27.425 node hugesize free / total
00:05:27.425 node0 1048576kB 0 / 0
00:05:27.425 node0 2048kB 0 / 0
00:05:27.425
00:05:27.425 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:27.425 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:27.425 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:27.425 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:05:27.687 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:05:27.687 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:05:27.687 23:54:17 -- spdk/autotest.sh@117 -- # uname -s
00:05:27.687 23:54:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:27.687 23:54:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:27.687 23:54:17 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:28.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:28.835 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:28.835 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:28.835 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:05:28.835 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:05:28.835 23:54:19 -- common/autotest_common.sh@1515 -- # sleep 1
00:05:29.778 23:54:20 -- common/autotest_common.sh@1516 -- # bdfs=()
00:05:29.778 23:54:20 -- common/autotest_common.sh@1516 -- # local bdfs
00:05:29.778 23:54:20 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:05:29.778 23:54:20 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:05:29.778 23:54:20 -- common/autotest_common.sh@1496 -- # bdfs=()
00:05:29.778 23:54:20 -- common/autotest_common.sh@1496 -- # local bdfs
00:05:29.778 23:54:20 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:29.778 23:54:20 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:29.778 23:54:20 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:29.778 23:54:20 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:29.778 23:54:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:29.778 23:54:20 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.300 Waiting for block devices as requested 00:05:30.300 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.562 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.562 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.562 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.866 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:35.866 23:54:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:35.866 23:54:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:35.866 23:54:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:35.866 23:54:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:35.866 23:54:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:35.866 23:54:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:35.866 23:54:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:35.866 23:54:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:35.866 23:54:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:35.866 23:54:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:35.866 23:54:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:35.866 23:54:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:35.866 23:54:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:35.866 23:54:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:35.866 23:54:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:35.866 23:54:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:35.866 23:54:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:35.866 23:54:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:35.866 23:54:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:35.866 23:54:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:35.866 23:54:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:35.866 23:54:26 -- common/autotest_common.sh@1541 -- # continue 00:05:35.866 23:54:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:35.866 23:54:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:35.866 23:54:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:35.866 23:54:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:35.866 23:54:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:05:35.866 23:54:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:35.867 23:54:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:35.867 23:54:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:35.867 23:54:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:35.867 23:54:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:35.867 23:54:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:35.867 23:54:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1541 -- # continue 00:05:35.867 23:54:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:35.867 23:54:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:35.867 23:54:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:35.867 23:54:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:35.867 23:54:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:35.867 23:54:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:35.867 23:54:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:35.867 23:54:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1541 -- # continue 00:05:35.867 23:54:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:35.867 23:54:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:35.867 23:54:26 -- 
common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:35.867 23:54:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:35.867 23:54:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:35.867 23:54:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:35.867 23:54:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:35.867 23:54:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:35.867 23:54:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:35.867 23:54:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:35.867 23:54:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:35.867 23:54:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:35.867 23:54:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:35.867 23:54:26 -- common/autotest_common.sh@1541 -- # continue 00:05:35.867 23:54:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:35.867 23:54:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:35.867 23:54:26 -- common/autotest_common.sh@10 -- # set +x 00:05:35.867 23:54:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:35.867 23:54:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.867 23:54:26 -- common/autotest_common.sh@10 -- # set +x 00:05:35.867 23:54:26 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.701 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.701 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.701 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.701 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.701 23:54:27 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:36.701 23:54:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.701 23:54:27 -- common/autotest_common.sh@10 -- # set +x 00:05:36.701 23:54:27 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:36.701 23:54:27 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:36.701 23:54:27 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:36.701 23:54:27 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:36.964 23:54:27 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:36.964 23:54:27 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:36.964 23:54:27 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:36.964 23:54:27 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:36.964 23:54:27 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:36.964 
23:54:27 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:36.964 23:54:27 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.964 23:54:27 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.964 23:54:27 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:36.964 23:54:27 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:36.964 23:54:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:36.964 23:54:27 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:36.964 23:54:27 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:36.964 23:54:27 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:36.964 23:54:27 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.964 23:54:27 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:36.964 23:54:27 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:36.964 23:54:27 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:36.964 23:54:27 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.964 23:54:27 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:36.964 23:54:27 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:36.964 23:54:27 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:36.964 23:54:27 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.964 23:54:27 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:36.964 23:54:27 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:36.964 23:54:27 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:36.964 23:54:27 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.964 23:54:27 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:36.964 23:54:27 -- common/autotest_common.sh@1570 -- # return 0 00:05:36.964 23:54:27 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:36.964 23:54:27 -- common/autotest_common.sh@1578 -- # return 0 00:05:36.964 23:54:27 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:36.964 23:54:27 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:36.964 23:54:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:36.964 23:54:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:36.964 23:54:27 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:36.964 23:54:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.964 23:54:27 -- common/autotest_common.sh@10 -- # set +x 00:05:36.964 23:54:27 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:36.964 23:54:27 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:36.964 23:54:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.964 23:54:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.964 23:54:27 -- common/autotest_common.sh@10 -- # set +x 00:05:36.964 ************************************ 00:05:36.964 START TEST env 00:05:36.964 ************************************ 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:36.964 * Looking for test storage... 
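The opal_revert_cleanup pass above enumerates the four controllers (gen_nvme.sh piped through jq for each traddr) and compares every PCI device ID against 0x0a54; the emulated QEMU devices all report 0x0010, so no OPAL-capable drive matches and both helpers return 0. A condensed, self-contained sketch of that filter (the function body is assumed; only the sysfs reads mirror the trace):

    # Hypothetical condensed form of the device-ID filter traced above.
    get_nvme_bdfs_by_id() {
        local id=$1; shift
        local bdf device matches=()
        for bdf in "$@"; do
            device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0010
            [[ $device == "$id" ]] && matches+=("$bdf")
        done
        (( ${#matches[@]} )) && printf '%s\n' "${matches[@]}"
    }
    # Empty output on this rig, since 0x0010 != 0x0a54:
    get_nvme_bdfs_by_id 0x0a54 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0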
00:05:36.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:36.964 23:54:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.964 23:54:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.964 23:54:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.964 23:54:27 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.964 23:54:27 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.964 23:54:27 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.964 23:54:27 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.964 23:54:27 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.964 23:54:27 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.964 23:54:27 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.964 23:54:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.964 23:54:27 env -- scripts/common.sh@344 -- # case "$op" in 00:05:36.964 23:54:27 env -- scripts/common.sh@345 -- # : 1 00:05:36.964 23:54:27 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.964 23:54:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.964 23:54:27 env -- scripts/common.sh@365 -- # decimal 1 00:05:36.964 23:54:27 env -- scripts/common.sh@353 -- # local d=1 00:05:36.964 23:54:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.964 23:54:27 env -- scripts/common.sh@355 -- # echo 1 00:05:36.964 23:54:27 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.964 23:54:27 env -- scripts/common.sh@366 -- # decimal 2 00:05:36.964 23:54:27 env -- scripts/common.sh@353 -- # local d=2 00:05:36.964 23:54:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.964 23:54:27 env -- scripts/common.sh@355 -- # echo 2 00:05:36.964 23:54:27 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.964 23:54:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.964 23:54:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.964 23:54:27 env -- scripts/common.sh@368 -- # return 0 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:36.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.964 --rc genhtml_branch_coverage=1 00:05:36.964 --rc genhtml_function_coverage=1 00:05:36.964 --rc genhtml_legend=1 00:05:36.964 --rc geninfo_all_blocks=1 00:05:36.964 --rc geninfo_unexecuted_blocks=1 00:05:36.964 00:05:36.964 ' 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:36.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.964 --rc genhtml_branch_coverage=1 00:05:36.964 --rc genhtml_function_coverage=1 00:05:36.964 --rc genhtml_legend=1 00:05:36.964 --rc geninfo_all_blocks=1 00:05:36.964 --rc geninfo_unexecuted_blocks=1 00:05:36.964 00:05:36.964 ' 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:36.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.964 --rc genhtml_branch_coverage=1 00:05:36.964 --rc genhtml_function_coverage=1 00:05:36.964 --rc 
genhtml_legend=1 00:05:36.964 --rc geninfo_all_blocks=1 00:05:36.964 --rc geninfo_unexecuted_blocks=1 00:05:36.964 00:05:36.964 ' 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:36.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.964 --rc genhtml_branch_coverage=1 00:05:36.964 --rc genhtml_function_coverage=1 00:05:36.964 --rc genhtml_legend=1 00:05:36.964 --rc geninfo_all_blocks=1 00:05:36.964 --rc geninfo_unexecuted_blocks=1 00:05:36.964 00:05:36.964 ' 00:05:36.964 23:54:27 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.964 23:54:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.964 23:54:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.964 ************************************ 00:05:36.964 START TEST env_memory 00:05:36.964 ************************************ 00:05:36.964 23:54:27 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:36.964 00:05:36.964 00:05:36.964 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.964 http://cunit.sourceforge.net/ 00:05:36.964 00:05:36.964 00:05:36.964 Suite: memory 00:05:37.226 Test: alloc and free memory map ...[2024-11-20 23:54:27.412160] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:37.226 passed 00:05:37.226 Test: mem map translation ...[2024-11-20 23:54:27.451608] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.226 [2024-11-20 23:54:27.451717] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.226 [2024-11-20 23:54:27.451827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.226 [2024-11-20 23:54:27.452156] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.226 passed 00:05:37.226 Test: mem map registration ...[2024-11-20 23:54:27.525691] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:37.226 [2024-11-20 23:54:27.525797] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:37.226 passed 00:05:37.226 Test: mem map adjacent registrations ...passed 00:05:37.226 00:05:37.226 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.226 suites 1 1 n/a 0 0 00:05:37.226 tests 4 4 4 0 0 00:05:37.226 asserts 152 152 152 0 n/a 00:05:37.226 00:05:37.226 Elapsed time = 0.240 seconds 00:05:37.226 00:05:37.226 real 0m0.266s 00:05:37.226 user 0m0.243s 00:05:37.226 sys 0m0.015s 00:05:37.226 23:54:27 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.226 23:54:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:37.226 ************************************ 00:05:37.226 END TEST env_memory 00:05:37.226 ************************************ 00:05:37.487 23:54:27 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:37.487 23:54:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.488 23:54:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.488 23:54:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.488 ************************************ 00:05:37.488 START TEST env_vtophys 00:05:37.488 ************************************ 00:05:37.488 23:54:27 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:37.488 EAL: lib.eal log level changed from notice to debug 00:05:37.488 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.488 EAL: Detected lcore 1 as core 0 on socket 0 00:05:37.488 EAL: Detected lcore 2 as core 0 on socket 0 00:05:37.488 EAL: Detected lcore 3 as core 0 on socket 0 00:05:37.488 EAL: Detected lcore 4 as core 0 on socket 0 00:05:37.488 EAL: Detected lcore 5 as core 0 on socket 0 00:05:37.488 EAL: Detected lcore 6 as core 0 on socket 0 00:05:37.488 EAL: Detected lcore 7 as core 0 on socket 0 00:05:37.488 EAL: Detected lcore 8 as core 0 on socket 0 00:05:37.488 EAL: Detected lcore 9 as core 0 on socket 0 00:05:37.488 EAL: Maximum logical cores by configuration: 128 00:05:37.488 EAL: Detected CPU lcores: 10 00:05:37.488 EAL: Detected NUMA nodes: 1 00:05:37.488 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:37.488 EAL: Detected shared linkage of DPDK 00:05:37.488 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:37.488 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:37.488 EAL: Registered [vdev] bus. 00:05:37.488 EAL: bus.vdev log level changed from disabled to notice 00:05:37.488 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:37.488 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:37.488 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:37.488 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:37.488 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:37.488 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:37.488 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:37.488 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:37.488 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.488 EAL: No shared files mode enabled, IPC is disabled 00:05:37.488 EAL: Selected IOVA mode 'PA' 00:05:37.488 EAL: Probing VFIO support... 00:05:37.488 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:37.488 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:37.488 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.488 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.488 EAL: Setting up physically contiguous memory... 
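EAL looks for the vfio kernel modules before choosing an IOVA mode; with /sys/module/vfio absent it logs the "not found" errors above and falls back to IOVA mode 'PA' over the uio_pci_generic bindings set up earlier. A rough shell equivalent of that presence test (illustrative, not EAL's actual probe code):

    # Reproduce the VFIO availability check EAL logs above.
    if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
        echo "VFIO loaded: IOVA mode VA is possible"
    else
        echo "VFIO modules not loaded: falling back to IOVA mode PA"
    fi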
00:05:37.488 EAL: Setting maximum number of open files to 524288 00:05:37.488 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.488 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.488 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.488 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.488 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.488 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.488 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.488 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.488 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.488 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.488 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.488 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.488 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.488 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.488 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.488 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.488 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.488 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.488 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.488 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.488 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.488 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.488 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.488 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.488 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.488 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.488 EAL: Hugepages will be freed exactly as allocated. 00:05:37.488 EAL: No shared files mode enabled, IPC is disabled 00:05:37.488 EAL: No shared files mode enabled, IPC is disabled 00:05:37.488 EAL: TSC frequency is ~2600000 KHz 00:05:37.488 EAL: Main lcore 0 is ready (tid=7f5021520a40;cpuset=[0]) 00:05:37.488 EAL: Trying to obtain current memory policy. 00:05:37.488 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.488 EAL: Restoring previous memory policy: 0 00:05:37.488 EAL: request: mp_malloc_sync 00:05:37.488 EAL: No shared files mode enabled, IPC is disabled 00:05:37.488 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.488 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:37.488 EAL: No shared files mode enabled, IPC is disabled 00:05:37.488 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:37.488 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.488 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:37.488 00:05:37.488 00:05:37.488 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.488 http://cunit.sourceforge.net/ 00:05:37.488 00:05:37.488 00:05:37.488 Suite: components_suite 00:05:37.749 Test: vtophys_malloc_test ...passed 00:05:37.749 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:37.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.749 EAL: Restoring previous memory policy: 4 00:05:37.749 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.749 EAL: request: mp_malloc_sync 00:05:37.749 EAL: No shared files mode enabled, IPC is disabled 00:05:37.749 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.749 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.749 EAL: request: mp_malloc_sync 00:05:37.749 EAL: No shared files mode enabled, IPC is disabled 00:05:37.749 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.749 EAL: Trying to obtain current memory policy. 00:05:37.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.749 EAL: Restoring previous memory policy: 4 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.750 EAL: request: mp_malloc_sync 00:05:37.750 EAL: No shared files mode enabled, IPC is disabled 00:05:37.750 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.750 EAL: request: mp_malloc_sync 00:05:37.750 EAL: No shared files mode enabled, IPC is disabled 00:05:37.750 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.750 EAL: Trying to obtain current memory policy. 00:05:37.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.750 EAL: Restoring previous memory policy: 4 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.750 EAL: request: mp_malloc_sync 00:05:37.750 EAL: No shared files mode enabled, IPC is disabled 00:05:37.750 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.750 EAL: request: mp_malloc_sync 00:05:37.750 EAL: No shared files mode enabled, IPC is disabled 00:05:37.750 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.750 EAL: Trying to obtain current memory policy. 00:05:37.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.750 EAL: Restoring previous memory policy: 4 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.750 EAL: request: mp_malloc_sync 00:05:37.750 EAL: No shared files mode enabled, IPC is disabled 00:05:37.750 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.750 EAL: request: mp_malloc_sync 00:05:37.750 EAL: No shared files mode enabled, IPC is disabled 00:05:37.750 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.750 EAL: Trying to obtain current memory policy. 00:05:37.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.750 EAL: Restoring previous memory policy: 4 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.750 EAL: request: mp_malloc_sync 00:05:37.750 EAL: No shared files mode enabled, IPC is disabled 00:05:37.750 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.750 EAL: request: mp_malloc_sync 00:05:37.750 EAL: No shared files mode enabled, IPC is disabled 00:05:37.750 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.750 EAL: Trying to obtain current memory policy. 
00:05:37.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.750 EAL: Restoring previous memory policy: 4 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.750 EAL: request: mp_malloc_sync 00:05:37.750 EAL: No shared files mode enabled, IPC is disabled 00:05:37.750 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.750 EAL: request: mp_malloc_sync 00:05:37.750 EAL: No shared files mode enabled, IPC is disabled 00:05:37.750 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.750 EAL: Trying to obtain current memory policy. 00:05:37.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.750 EAL: Restoring previous memory policy: 4 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.750 EAL: request: mp_malloc_sync 00:05:37.750 EAL: No shared files mode enabled, IPC is disabled 00:05:37.750 EAL: Heap on socket 0 was expanded by 130MB 00:05:37.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.011 EAL: request: mp_malloc_sync 00:05:38.011 EAL: No shared files mode enabled, IPC is disabled 00:05:38.011 EAL: Heap on socket 0 was shrunk by 130MB 00:05:38.011 EAL: Trying to obtain current memory policy. 00:05:38.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.011 EAL: Restoring previous memory policy: 4 00:05:38.011 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.011 EAL: request: mp_malloc_sync 00:05:38.011 EAL: No shared files mode enabled, IPC is disabled 00:05:38.011 EAL: Heap on socket 0 was expanded by 258MB 00:05:38.011 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.011 EAL: request: mp_malloc_sync 00:05:38.011 EAL: No shared files mode enabled, IPC is disabled 00:05:38.011 EAL: Heap on socket 0 was shrunk by 258MB 00:05:38.011 EAL: Trying to obtain current memory policy. 00:05:38.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.012 EAL: Restoring previous memory policy: 4 00:05:38.012 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.012 EAL: request: mp_malloc_sync 00:05:38.012 EAL: No shared files mode enabled, IPC is disabled 00:05:38.012 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.012 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.273 EAL: request: mp_malloc_sync 00:05:38.273 EAL: No shared files mode enabled, IPC is disabled 00:05:38.273 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.273 EAL: Trying to obtain current memory policy. 
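The allocation sizes in these passes are not arbitrary: each pass requests 2^k + 2 MB for k = 1..10, which is why the heap steps through 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB here, with the final pass below requesting 1026 MB. The sequence can be reproduced with one line of shell arithmetic:

    # 2^k + 2 MB for k = 1..10 -> 4 6 10 18 34 66 130 258 514 1026
    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo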
00:05:38.273 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.273 EAL: Restoring previous memory policy: 4 00:05:38.273 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.273 EAL: request: mp_malloc_sync 00:05:38.273 EAL: No shared files mode enabled, IPC is disabled 00:05:38.273 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.273 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.535 passed 00:05:38.535 00:05:38.535 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.535 suites 1 1 n/a 0 0 00:05:38.535 tests 2 2 2 0 0 00:05:38.535 asserts 5421 5421 5421 0 n/a 00:05:38.535 00:05:38.535 Elapsed time = 0.934 seconds 00:05:38.535 EAL: request: mp_malloc_sync 00:05:38.535 EAL: No shared files mode enabled, IPC is disabled 00:05:38.535 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.535 EAL: request: mp_malloc_sync 00:05:38.535 EAL: No shared files mode enabled, IPC is disabled 00:05:38.535 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.535 EAL: No shared files mode enabled, IPC is disabled 00:05:38.535 EAL: No shared files mode enabled, IPC is disabled 00:05:38.535 EAL: No shared files mode enabled, IPC is disabled 00:05:38.535 00:05:38.535 real 0m1.138s 00:05:38.535 user 0m0.449s 00:05:38.535 sys 0m0.558s 00:05:38.535 ************************************ 00:05:38.535 END TEST env_vtophys 00:05:38.535 ************************************ 00:05:38.535 23:54:28 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.535 23:54:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:38.535 23:54:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:38.535 23:54:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.535 23:54:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.535 23:54:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.535 ************************************ 00:05:38.535 START TEST env_pci 00:05:38.535 ************************************ 00:05:38.535 23:54:28 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:38.535 00:05:38.535 00:05:38.535 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.535 http://cunit.sourceforge.net/ 00:05:38.535 00:05:38.535 00:05:38.535 Suite: pci 00:05:38.535 Test: pci_hook ...[2024-11-20 23:54:28.860698] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69334 has claimed it 00:05:38.535 passed 00:05:38.535 00:05:38.535 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.535 suites 1 1 n/a 0 0 00:05:38.535 tests 1 1 1 0 0 00:05:38.535 asserts 25 25 25 0 n/a 00:05:38.535 00:05:38.535 Elapsed time = 0.006 seconds 00:05:38.535 EAL: Cannot find device (10000:00:01.0) 00:05:38.535 EAL: Failed to attach device on primary process 00:05:38.535 ************************************ 00:05:38.535 END TEST env_pci 00:05:38.535 ************************************ 00:05:38.535 00:05:38.535 real 0m0.050s 00:05:38.535 user 0m0.022s 00:05:38.535 sys 0m0.027s 00:05:38.535 23:54:28 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.535 23:54:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:38.535 23:54:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:38.535 23:54:28 env -- env/env.sh@15 -- # uname 00:05:38.535 23:54:28 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:38.535 23:54:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:38.535 23:54:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.535 23:54:28 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:38.535 23:54:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.535 23:54:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.535 ************************************ 00:05:38.535 START TEST env_dpdk_post_init 00:05:38.535 ************************************ 00:05:38.535 23:54:28 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.796 EAL: Detected CPU lcores: 10 00:05:38.796 EAL: Detected NUMA nodes: 1 00:05:38.796 EAL: Detected shared linkage of DPDK 00:05:38.796 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.796 EAL: Selected IOVA mode 'PA' 00:05:38.796 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.796 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:38.796 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:38.796 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:38.796 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:38.796 Starting DPDK initialization... 00:05:38.796 Starting SPDK post initialization... 00:05:38.796 SPDK NVMe probe 00:05:38.796 Attaching to 0000:00:10.0 00:05:38.796 Attaching to 0000:00:11.0 00:05:38.796 Attaching to 0000:00:12.0 00:05:38.796 Attaching to 0000:00:13.0 00:05:38.796 Attached to 0000:00:10.0 00:05:38.796 Attached to 0000:00:11.0 00:05:38.796 Attached to 0000:00:13.0 00:05:38.796 Attached to 0000:00:12.0 00:05:38.796 Cleaning up... 
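env_dpdk_post_init can only attach these four controllers because setup.sh rebound them from the kernel nvme driver to uio_pci_generic earlier (the "nvme -> uio_pci_generic" lines above). The standard sysfs mechanics behind such a rebind look roughly like the following; this is an illustration of the kernel interface, not a quote of scripts/setup.sh, and the BDF is taken from the log for example purposes (requires root):

    # Illustrative sysfs rebind from nvme to uio_pci_generic.
    bdf=0000:00:10.0
    echo "$bdf" > /sys/bus/pci/drivers/nvme/unbind            # detach kernel driver
    echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
    echo "$bdf" > /sys/bus/pci/drivers_probe                  # bind to the override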
00:05:38.796 00:05:38.796 real 0m0.196s 00:05:38.796 user 0m0.047s 00:05:38.796 sys 0m0.051s 00:05:38.796 23:54:29 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.796 ************************************ 00:05:38.796 END TEST env_dpdk_post_init 00:05:38.796 ************************************ 00:05:38.796 23:54:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.796 23:54:29 env -- env/env.sh@26 -- # uname 00:05:38.796 23:54:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:38.796 23:54:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:38.796 23:54:29 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.796 23:54:29 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.796 23:54:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.796 ************************************ 00:05:38.796 START TEST env_mem_callbacks 00:05:38.796 ************************************ 00:05:38.796 23:54:29 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:38.796 EAL: Detected CPU lcores: 10 00:05:38.796 EAL: Detected NUMA nodes: 1 00:05:38.796 EAL: Detected shared linkage of DPDK 00:05:38.796 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.796 EAL: Selected IOVA mode 'PA' 00:05:39.057 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.057 00:05:39.057 00:05:39.057 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.057 http://cunit.sourceforge.net/ 00:05:39.057 00:05:39.057 00:05:39.057 Suite: memory 00:05:39.057 Test: test ... 00:05:39.057 register 0x200000200000 2097152 00:05:39.057 malloc 3145728 00:05:39.057 register 0x200000400000 4194304 00:05:39.057 buf 0x200000500000 len 3145728 PASSED 00:05:39.057 malloc 64 00:05:39.057 buf 0x2000004fff40 len 64 PASSED 00:05:39.057 malloc 4194304 00:05:39.057 register 0x200000800000 6291456 00:05:39.057 buf 0x200000a00000 len 4194304 PASSED 00:05:39.057 free 0x200000500000 3145728 00:05:39.057 free 0x2000004fff40 64 00:05:39.057 unregister 0x200000400000 4194304 PASSED 00:05:39.057 free 0x200000a00000 4194304 00:05:39.057 unregister 0x200000800000 6291456 PASSED 00:05:39.057 malloc 8388608 00:05:39.057 register 0x200000400000 10485760 00:05:39.057 buf 0x200000600000 len 8388608 PASSED 00:05:39.057 free 0x200000600000 8388608 00:05:39.057 unregister 0x200000400000 10485760 PASSED 00:05:39.057 passed 00:05:39.057 00:05:39.057 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.057 suites 1 1 n/a 0 0 00:05:39.057 tests 1 1 1 0 0 00:05:39.057 asserts 15 15 15 0 n/a 00:05:39.057 00:05:39.057 Elapsed time = 0.007 seconds 00:05:39.057 00:05:39.057 real 0m0.150s 00:05:39.057 user 0m0.020s 00:05:39.057 sys 0m0.028s 00:05:39.057 23:54:29 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.057 23:54:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:39.057 ************************************ 00:05:39.057 END TEST env_mem_callbacks 00:05:39.057 ************************************ 00:05:39.057 00:05:39.057 real 0m2.155s 00:05:39.057 user 0m0.933s 00:05:39.057 sys 0m0.873s 00:05:39.057 23:54:29 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.057 23:54:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.057 ************************************ 00:05:39.057 END TEST env 00:05:39.057 
************************************ 00:05:39.057 23:54:29 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.057 23:54:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.057 23:54:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.057 23:54:29 -- common/autotest_common.sh@10 -- # set +x 00:05:39.057 ************************************ 00:05:39.057 START TEST rpc 00:05:39.057 ************************************ 00:05:39.057 23:54:29 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.057 * Looking for test storage... 00:05:39.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.057 23:54:29 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.057 23:54:29 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.057 23:54:29 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.319 23:54:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.319 23:54:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.319 23:54:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.319 23:54:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.319 23:54:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.319 23:54:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.319 23:54:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.319 23:54:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.319 23:54:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.319 23:54:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.319 23:54:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.319 23:54:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.319 23:54:29 rpc -- scripts/common.sh@345 -- # : 1 00:05:39.319 23:54:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.319 23:54:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.319 23:54:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.319 23:54:29 rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.319 23:54:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.319 23:54:29 rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.319 23:54:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.319 23:54:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.319 23:54:29 rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.319 23:54:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.319 23:54:29 rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.319 23:54:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.319 23:54:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.319 23:54:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.319 23:54:29 rpc -- scripts/common.sh@368 -- # return 0 00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.319 --rc genhtml_branch_coverage=1 00:05:39.319 --rc genhtml_function_coverage=1 00:05:39.319 --rc genhtml_legend=1 00:05:39.319 --rc geninfo_all_blocks=1 00:05:39.319 --rc geninfo_unexecuted_blocks=1 00:05:39.319 00:05:39.319 ' 00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.319 --rc genhtml_branch_coverage=1 00:05:39.319 --rc genhtml_function_coverage=1 00:05:39.319 --rc genhtml_legend=1 00:05:39.319 --rc geninfo_all_blocks=1 00:05:39.319 --rc geninfo_unexecuted_blocks=1 00:05:39.319 00:05:39.319 ' 00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.319 --rc genhtml_branch_coverage=1 00:05:39.319 --rc genhtml_function_coverage=1 00:05:39.319 --rc genhtml_legend=1 00:05:39.319 --rc geninfo_all_blocks=1 00:05:39.319 --rc geninfo_unexecuted_blocks=1 00:05:39.319 00:05:39.319 ' 00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.319 --rc genhtml_branch_coverage=1 00:05:39.319 --rc genhtml_function_coverage=1 00:05:39.319 --rc genhtml_legend=1 00:05:39.319 --rc geninfo_all_blocks=1 00:05:39.319 --rc geninfo_unexecuted_blocks=1 00:05:39.319 00:05:39.319 ' 00:05:39.319 23:54:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69450 00:05:39.319 23:54:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.319 23:54:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69450 00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@831 -- # '[' -z 69450 ']' 00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.319 23:54:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
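waitforlisten blocks here until the freshly started spdk_tgt answers on its RPC socket. A minimal sketch of that polling loop, assuming scripts/rpc.py from the repo and the default /var/tmp/spdk.sock (the loop body is an approximation; the real helper lives in test/common/autotest_common.sh):

    # Hedged sketch of an RPC-readiness poll, not the real waitforlisten.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1            # target died
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                              # timed out
    }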
00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.319 23:54:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.319 [2024-11-20 23:54:29.604153] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:39.319 [2024-11-20 23:54:29.604389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69450 ] 00:05:39.581 [2024-11-20 23:54:29.744095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.581 [2024-11-20 23:54:29.774975] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:39.581 [2024-11-20 23:54:29.775176] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69450' to capture a snapshot of events at runtime. 00:05:39.581 [2024-11-20 23:54:29.775195] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:39.581 [2024-11-20 23:54:29.775204] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:39.581 [2024-11-20 23:54:29.775217] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69450 for offline analysis/debug. 00:05:39.581 [2024-11-20 23:54:29.775251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.154 23:54:30 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.154 23:54:30 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:40.154 23:54:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.154 23:54:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.154 23:54:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:40.154 23:54:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:40.154 23:54:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.154 23:54:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.154 23:54:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.154 ************************************ 00:05:40.154 START TEST rpc_integrity 00:05:40.154 ************************************ 00:05:40.154 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:40.154 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.154 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.154 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.154 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.154 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.154 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:40.154 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.154 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.154 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.154 
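Because spdk_tgt was started with '-e bdev', its bdev tracepoints are being recorded to /dev/shm/spdk_tgt_trace.pid69450, and the app_setup_trace NOTICE above gives the exact command for decoding them while the target runs. For example (the spdk_trace command is quoted from the NOTICE; the binary path and output redirection are assumptions):

    # Decode the live trace shared-memory file for pid 69450.
    build/bin/spdk_trace -s spdk_tgt -p 69450 > bdev_trace.txt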
23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.154 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.154 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:40.154 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.154 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.154 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.154 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.155 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.155 { 00:05:40.155 "name": "Malloc0", 00:05:40.155 "aliases": [ 00:05:40.155 "d4af837e-1479-4ac1-ac61-a74d8c354dfe" 00:05:40.155 ], 00:05:40.155 "product_name": "Malloc disk", 00:05:40.155 "block_size": 512, 00:05:40.155 "num_blocks": 16384, 00:05:40.155 "uuid": "d4af837e-1479-4ac1-ac61-a74d8c354dfe", 00:05:40.155 "assigned_rate_limits": { 00:05:40.155 "rw_ios_per_sec": 0, 00:05:40.155 "rw_mbytes_per_sec": 0, 00:05:40.155 "r_mbytes_per_sec": 0, 00:05:40.155 "w_mbytes_per_sec": 0 00:05:40.155 }, 00:05:40.155 "claimed": false, 00:05:40.155 "zoned": false, 00:05:40.155 "supported_io_types": { 00:05:40.155 "read": true, 00:05:40.155 "write": true, 00:05:40.155 "unmap": true, 00:05:40.155 "flush": true, 00:05:40.155 "reset": true, 00:05:40.155 "nvme_admin": false, 00:05:40.155 "nvme_io": false, 00:05:40.155 "nvme_io_md": false, 00:05:40.155 "write_zeroes": true, 00:05:40.155 "zcopy": true, 00:05:40.155 "get_zone_info": false, 00:05:40.155 "zone_management": false, 00:05:40.155 "zone_append": false, 00:05:40.155 "compare": false, 00:05:40.155 "compare_and_write": false, 00:05:40.155 "abort": true, 00:05:40.155 "seek_hole": false, 00:05:40.155 "seek_data": false, 00:05:40.155 "copy": true, 00:05:40.155 "nvme_iov_md": false 00:05:40.155 }, 00:05:40.155 "memory_domains": [ 00:05:40.155 { 00:05:40.155 "dma_device_id": "system", 00:05:40.155 "dma_device_type": 1 00:05:40.155 }, 00:05:40.155 { 00:05:40.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.155 "dma_device_type": 2 00:05:40.155 } 00:05:40.155 ], 00:05:40.155 "driver_specific": {} 00:05:40.155 } 00:05:40.155 ]' 00:05:40.155 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.155 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.155 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:40.155 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.155 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.155 [2024-11-20 23:54:30.553016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:40.155 [2024-11-20 23:54:30.553077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.155 [2024-11-20 23:54:30.553104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:40.155 [2024-11-20 23:54:30.553113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.155 [2024-11-20 23:54:30.555323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.155 [2024-11-20 23:54:30.555357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.155 Passthru0 00:05:40.155 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
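The same sequence can be replayed by hand against a running target; the sizes match the JSON above, since 8 MB at a 512-byte block size yields num_blocks 16384. All RPC names appear in the trace itself; the default socket path and the jq post-processing are assumptions:

    # Manual replay of the rpc_integrity steps traced above.
    scripts/rpc.py bdev_malloc_create 8 512                     # -> Malloc0
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length                   # expect 2
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length                   # back to 0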
00:05:40.155 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.155 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.155 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.416 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.416 { 00:05:40.416 "name": "Malloc0", 00:05:40.416 "aliases": [ 00:05:40.416 "d4af837e-1479-4ac1-ac61-a74d8c354dfe" 00:05:40.416 ], 00:05:40.416 "product_name": "Malloc disk", 00:05:40.416 "block_size": 512, 00:05:40.416 "num_blocks": 16384, 00:05:40.416 "uuid": "d4af837e-1479-4ac1-ac61-a74d8c354dfe", 00:05:40.416 "assigned_rate_limits": { 00:05:40.416 "rw_ios_per_sec": 0, 00:05:40.416 "rw_mbytes_per_sec": 0, 00:05:40.416 "r_mbytes_per_sec": 0, 00:05:40.416 "w_mbytes_per_sec": 0 00:05:40.416 }, 00:05:40.416 "claimed": true, 00:05:40.416 "claim_type": "exclusive_write", 00:05:40.416 "zoned": false, 00:05:40.416 "supported_io_types": { 00:05:40.416 "read": true, 00:05:40.416 "write": true, 00:05:40.416 "unmap": true, 00:05:40.416 "flush": true, 00:05:40.416 "reset": true, 00:05:40.416 "nvme_admin": false, 00:05:40.416 "nvme_io": false, 00:05:40.416 "nvme_io_md": false, 00:05:40.416 "write_zeroes": true, 00:05:40.416 "zcopy": true, 00:05:40.416 "get_zone_info": false, 00:05:40.416 "zone_management": false, 00:05:40.416 "zone_append": false, 00:05:40.416 "compare": false, 00:05:40.416 "compare_and_write": false, 00:05:40.416 "abort": true, 00:05:40.416 "seek_hole": false, 00:05:40.416 "seek_data": false, 00:05:40.416 "copy": true, 00:05:40.416 "nvme_iov_md": false 00:05:40.416 }, 00:05:40.416 "memory_domains": [ 00:05:40.416 { 00:05:40.416 "dma_device_id": "system", 00:05:40.416 "dma_device_type": 1 00:05:40.416 }, 00:05:40.416 { 00:05:40.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.416 "dma_device_type": 2 00:05:40.416 } 00:05:40.416 ], 00:05:40.416 "driver_specific": {} 00:05:40.416 }, 00:05:40.416 { 00:05:40.416 "name": "Passthru0", 00:05:40.416 "aliases": [ 00:05:40.416 "a1583c91-5aae-5320-b2f6-677d126ea97b" 00:05:40.416 ], 00:05:40.416 "product_name": "passthru", 00:05:40.416 "block_size": 512, 00:05:40.416 "num_blocks": 16384, 00:05:40.416 "uuid": "a1583c91-5aae-5320-b2f6-677d126ea97b", 00:05:40.416 "assigned_rate_limits": { 00:05:40.416 "rw_ios_per_sec": 0, 00:05:40.416 "rw_mbytes_per_sec": 0, 00:05:40.416 "r_mbytes_per_sec": 0, 00:05:40.416 "w_mbytes_per_sec": 0 00:05:40.416 }, 00:05:40.416 "claimed": false, 00:05:40.416 "zoned": false, 00:05:40.416 "supported_io_types": { 00:05:40.416 "read": true, 00:05:40.416 "write": true, 00:05:40.416 "unmap": true, 00:05:40.416 "flush": true, 00:05:40.416 "reset": true, 00:05:40.416 "nvme_admin": false, 00:05:40.416 "nvme_io": false, 00:05:40.416 "nvme_io_md": false, 00:05:40.416 "write_zeroes": true, 00:05:40.416 "zcopy": true, 00:05:40.416 "get_zone_info": false, 00:05:40.416 "zone_management": false, 00:05:40.416 "zone_append": false, 00:05:40.416 "compare": false, 00:05:40.416 "compare_and_write": false, 00:05:40.416 "abort": true, 00:05:40.416 "seek_hole": false, 00:05:40.416 "seek_data": false, 00:05:40.416 "copy": true, 00:05:40.416 "nvme_iov_md": false 00:05:40.416 }, 00:05:40.416 "memory_domains": [ 00:05:40.416 { 00:05:40.416 "dma_device_id": "system", 00:05:40.416 "dma_device_type": 1 00:05:40.416 }, 00:05:40.416 { 00:05:40.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.416 "dma_device_type": 
2 00:05:40.416 } 00:05:40.416 ], 00:05:40.416 "driver_specific": { 00:05:40.416 "passthru": { 00:05:40.416 "name": "Passthru0", 00:05:40.416 "base_bdev_name": "Malloc0" 00:05:40.416 } 00:05:40.416 } 00:05:40.416 } 00:05:40.416 ]' 00:05:40.416 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:40.416 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.416 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.416 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.416 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.416 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.416 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:40.416 ************************************ 00:05:40.416 END TEST rpc_integrity 00:05:40.416 ************************************ 00:05:40.416 23:54:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.416 00:05:40.416 real 0m0.232s 00:05:40.416 user 0m0.131s 00:05:40.416 sys 0m0.038s 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.416 23:54:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.417 23:54:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:40.417 23:54:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.417 23:54:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.417 23:54:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.417 ************************************ 00:05:40.417 START TEST rpc_plugins 00:05:40.417 ************************************ 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:40.417 23:54:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.417 23:54:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:40.417 23:54:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.417 23:54:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:40.417 { 00:05:40.417 "name": "Malloc1", 00:05:40.417 
"aliases": [ 00:05:40.417 "987f3f7b-c38c-466b-9aa4-3acf30553db2" 00:05:40.417 ], 00:05:40.417 "product_name": "Malloc disk", 00:05:40.417 "block_size": 4096, 00:05:40.417 "num_blocks": 256, 00:05:40.417 "uuid": "987f3f7b-c38c-466b-9aa4-3acf30553db2", 00:05:40.417 "assigned_rate_limits": { 00:05:40.417 "rw_ios_per_sec": 0, 00:05:40.417 "rw_mbytes_per_sec": 0, 00:05:40.417 "r_mbytes_per_sec": 0, 00:05:40.417 "w_mbytes_per_sec": 0 00:05:40.417 }, 00:05:40.417 "claimed": false, 00:05:40.417 "zoned": false, 00:05:40.417 "supported_io_types": { 00:05:40.417 "read": true, 00:05:40.417 "write": true, 00:05:40.417 "unmap": true, 00:05:40.417 "flush": true, 00:05:40.417 "reset": true, 00:05:40.417 "nvme_admin": false, 00:05:40.417 "nvme_io": false, 00:05:40.417 "nvme_io_md": false, 00:05:40.417 "write_zeroes": true, 00:05:40.417 "zcopy": true, 00:05:40.417 "get_zone_info": false, 00:05:40.417 "zone_management": false, 00:05:40.417 "zone_append": false, 00:05:40.417 "compare": false, 00:05:40.417 "compare_and_write": false, 00:05:40.417 "abort": true, 00:05:40.417 "seek_hole": false, 00:05:40.417 "seek_data": false, 00:05:40.417 "copy": true, 00:05:40.417 "nvme_iov_md": false 00:05:40.417 }, 00:05:40.417 "memory_domains": [ 00:05:40.417 { 00:05:40.417 "dma_device_id": "system", 00:05:40.417 "dma_device_type": 1 00:05:40.417 }, 00:05:40.417 { 00:05:40.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.417 "dma_device_type": 2 00:05:40.417 } 00:05:40.417 ], 00:05:40.417 "driver_specific": {} 00:05:40.417 } 00:05:40.417 ]' 00:05:40.417 23:54:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:40.417 23:54:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:40.417 23:54:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.417 23:54:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.417 23:54:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:40.417 23:54:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:40.417 23:54:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:40.417 00:05:40.417 real 0m0.120s 00:05:40.417 user 0m0.070s 00:05:40.417 sys 0m0.014s 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.417 23:54:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.417 ************************************ 00:05:40.417 END TEST rpc_plugins 00:05:40.417 ************************************ 00:05:40.677 23:54:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:40.677 23:54:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.677 23:54:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.677 23:54:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.677 ************************************ 00:05:40.677 START TEST rpc_trace_cmd_test 00:05:40.677 ************************************ 00:05:40.677 23:54:30 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:40.677 23:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:40.677 23:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:40.677 23:54:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.678 23:54:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.678 23:54:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.678 23:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:40.678 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69450", 00:05:40.678 "tpoint_group_mask": "0x8", 00:05:40.678 "iscsi_conn": { 00:05:40.678 "mask": "0x2", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "scsi": { 00:05:40.678 "mask": "0x4", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "bdev": { 00:05:40.678 "mask": "0x8", 00:05:40.678 "tpoint_mask": "0xffffffffffffffff" 00:05:40.678 }, 00:05:40.678 "nvmf_rdma": { 00:05:40.678 "mask": "0x10", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "nvmf_tcp": { 00:05:40.678 "mask": "0x20", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "ftl": { 00:05:40.678 "mask": "0x40", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "blobfs": { 00:05:40.678 "mask": "0x80", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "dsa": { 00:05:40.678 "mask": "0x200", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "thread": { 00:05:40.678 "mask": "0x400", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "nvme_pcie": { 00:05:40.678 "mask": "0x800", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "iaa": { 00:05:40.678 "mask": "0x1000", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "nvme_tcp": { 00:05:40.678 "mask": "0x2000", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "bdev_nvme": { 00:05:40.678 "mask": "0x4000", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "sock": { 00:05:40.678 "mask": "0x8000", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "blob": { 00:05:40.678 "mask": "0x10000", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 }, 00:05:40.678 "bdev_raid": { 00:05:40.678 "mask": "0x20000", 00:05:40.678 "tpoint_mask": "0x0" 00:05:40.678 } 00:05:40.678 }' 00:05:40.678 23:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:40.678 23:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:40.678 23:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:40.678 23:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:40.678 23:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:40.678 23:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:40.678 23:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:40.678 23:54:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:40.678 23:54:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:40.678 ************************************ 00:05:40.678 END TEST rpc_trace_cmd_test 00:05:40.678 ************************************ 00:05:40.678 23:54:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:40.678 00:05:40.678 real 0m0.164s 00:05:40.678 user 0m0.125s 00:05:40.678 sys 0m0.025s 00:05:40.678 23:54:31 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.678 23:54:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.678 23:54:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:40.678 23:54:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:40.678 23:54:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:40.678 23:54:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.678 23:54:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.678 23:54:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.678 ************************************ 00:05:40.678 START TEST rpc_daemon_integrity 00:05:40.678 ************************************ 00:05:40.678 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:40.678 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.678 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.678 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.678 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.678 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.939 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:40.939 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.939 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.939 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.940 { 00:05:40.940 "name": "Malloc2", 00:05:40.940 "aliases": [ 00:05:40.940 "af129a1b-07fc-4c18-bb52-aa315cd1386d" 00:05:40.940 ], 00:05:40.940 "product_name": "Malloc disk", 00:05:40.940 "block_size": 512, 00:05:40.940 "num_blocks": 16384, 00:05:40.940 "uuid": "af129a1b-07fc-4c18-bb52-aa315cd1386d", 00:05:40.940 "assigned_rate_limits": { 00:05:40.940 "rw_ios_per_sec": 0, 00:05:40.940 "rw_mbytes_per_sec": 0, 00:05:40.940 "r_mbytes_per_sec": 0, 00:05:40.940 "w_mbytes_per_sec": 0 00:05:40.940 }, 00:05:40.940 "claimed": false, 00:05:40.940 "zoned": false, 00:05:40.940 "supported_io_types": { 00:05:40.940 "read": true, 00:05:40.940 "write": true, 00:05:40.940 "unmap": true, 00:05:40.940 "flush": true, 00:05:40.940 "reset": true, 00:05:40.940 "nvme_admin": false, 00:05:40.940 "nvme_io": false, 00:05:40.940 "nvme_io_md": false, 00:05:40.940 "write_zeroes": true, 00:05:40.940 "zcopy": true, 00:05:40.940 "get_zone_info": false, 00:05:40.940 "zone_management": false, 00:05:40.940 "zone_append": false, 00:05:40.940 "compare": false, 00:05:40.940 "compare_and_write": false, 00:05:40.940 "abort": true, 00:05:40.940 
"seek_hole": false, 00:05:40.940 "seek_data": false, 00:05:40.940 "copy": true, 00:05:40.940 "nvme_iov_md": false 00:05:40.940 }, 00:05:40.940 "memory_domains": [ 00:05:40.940 { 00:05:40.940 "dma_device_id": "system", 00:05:40.940 "dma_device_type": 1 00:05:40.940 }, 00:05:40.940 { 00:05:40.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.940 "dma_device_type": 2 00:05:40.940 } 00:05:40.940 ], 00:05:40.940 "driver_specific": {} 00:05:40.940 } 00:05:40.940 ]' 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.940 [2024-11-20 23:54:31.185351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:40.940 [2024-11-20 23:54:31.185406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.940 [2024-11-20 23:54:31.185428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:40.940 [2024-11-20 23:54:31.185438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.940 [2024-11-20 23:54:31.187605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.940 [2024-11-20 23:54:31.187640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.940 Passthru0 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.940 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.940 { 00:05:40.940 "name": "Malloc2", 00:05:40.940 "aliases": [ 00:05:40.940 "af129a1b-07fc-4c18-bb52-aa315cd1386d" 00:05:40.940 ], 00:05:40.940 "product_name": "Malloc disk", 00:05:40.940 "block_size": 512, 00:05:40.940 "num_blocks": 16384, 00:05:40.940 "uuid": "af129a1b-07fc-4c18-bb52-aa315cd1386d", 00:05:40.940 "assigned_rate_limits": { 00:05:40.940 "rw_ios_per_sec": 0, 00:05:40.940 "rw_mbytes_per_sec": 0, 00:05:40.940 "r_mbytes_per_sec": 0, 00:05:40.940 "w_mbytes_per_sec": 0 00:05:40.940 }, 00:05:40.940 "claimed": true, 00:05:40.940 "claim_type": "exclusive_write", 00:05:40.940 "zoned": false, 00:05:40.940 "supported_io_types": { 00:05:40.940 "read": true, 00:05:40.940 "write": true, 00:05:40.940 "unmap": true, 00:05:40.940 "flush": true, 00:05:40.940 "reset": true, 00:05:40.940 "nvme_admin": false, 00:05:40.940 "nvme_io": false, 00:05:40.940 "nvme_io_md": false, 00:05:40.940 "write_zeroes": true, 00:05:40.940 "zcopy": true, 00:05:40.940 "get_zone_info": false, 00:05:40.940 "zone_management": false, 00:05:40.940 "zone_append": false, 00:05:40.940 "compare": false, 00:05:40.940 "compare_and_write": false, 00:05:40.940 "abort": true, 00:05:40.940 "seek_hole": false, 00:05:40.940 "seek_data": false, 00:05:40.940 "copy": true, 00:05:40.940 "nvme_iov_md": false 
00:05:40.940 }, 00:05:40.940 "memory_domains": [ 00:05:40.940 { 00:05:40.940 "dma_device_id": "system", 00:05:40.940 "dma_device_type": 1 00:05:40.940 }, 00:05:40.940 { 00:05:40.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.940 "dma_device_type": 2 00:05:40.940 } 00:05:40.940 ], 00:05:40.940 "driver_specific": {} 00:05:40.940 }, 00:05:40.940 { 00:05:40.940 "name": "Passthru0", 00:05:40.940 "aliases": [ 00:05:40.940 "5dcb1084-ef1b-56e9-a2c2-446294872364" 00:05:40.940 ], 00:05:40.940 "product_name": "passthru", 00:05:40.940 "block_size": 512, 00:05:40.940 "num_blocks": 16384, 00:05:40.940 "uuid": "5dcb1084-ef1b-56e9-a2c2-446294872364", 00:05:40.940 "assigned_rate_limits": { 00:05:40.940 "rw_ios_per_sec": 0, 00:05:40.940 "rw_mbytes_per_sec": 0, 00:05:40.940 "r_mbytes_per_sec": 0, 00:05:40.940 "w_mbytes_per_sec": 0 00:05:40.940 }, 00:05:40.940 "claimed": false, 00:05:40.940 "zoned": false, 00:05:40.940 "supported_io_types": { 00:05:40.940 "read": true, 00:05:40.940 "write": true, 00:05:40.940 "unmap": true, 00:05:40.940 "flush": true, 00:05:40.940 "reset": true, 00:05:40.941 "nvme_admin": false, 00:05:40.941 "nvme_io": false, 00:05:40.941 "nvme_io_md": false, 00:05:40.941 "write_zeroes": true, 00:05:40.941 "zcopy": true, 00:05:40.941 "get_zone_info": false, 00:05:40.941 "zone_management": false, 00:05:40.941 "zone_append": false, 00:05:40.941 "compare": false, 00:05:40.941 "compare_and_write": false, 00:05:40.941 "abort": true, 00:05:40.941 "seek_hole": false, 00:05:40.941 "seek_data": false, 00:05:40.941 "copy": true, 00:05:40.941 "nvme_iov_md": false 00:05:40.941 }, 00:05:40.941 "memory_domains": [ 00:05:40.941 { 00:05:40.941 "dma_device_id": "system", 00:05:40.941 "dma_device_type": 1 00:05:40.941 }, 00:05:40.941 { 00:05:40.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.941 "dma_device_type": 2 00:05:40.941 } 00:05:40.941 ], 00:05:40.941 "driver_specific": { 00:05:40.941 "passthru": { 00:05:40.941 "name": "Passthru0", 00:05:40.941 "base_bdev_name": "Malloc2" 00:05:40.941 } 00:05:40.941 } 00:05:40.941 } 00:05:40.941 ]' 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # 
jq length 00:05:40.941 ************************************ 00:05:40.941 END TEST rpc_daemon_integrity 00:05:40.941 ************************************ 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.941 00:05:40.941 real 0m0.220s 00:05:40.941 user 0m0.124s 00:05:40.941 sys 0m0.038s 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.941 23:54:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.941 23:54:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:40.941 23:54:31 rpc -- rpc/rpc.sh@84 -- # killprocess 69450 00:05:40.941 23:54:31 rpc -- common/autotest_common.sh@950 -- # '[' -z 69450 ']' 00:05:40.941 23:54:31 rpc -- common/autotest_common.sh@954 -- # kill -0 69450 00:05:40.941 23:54:31 rpc -- common/autotest_common.sh@955 -- # uname 00:05:40.941 23:54:31 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.941 23:54:31 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69450 00:05:41.202 23:54:31 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.202 23:54:31 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.202 23:54:31 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69450' 00:05:41.202 killing process with pid 69450 00:05:41.202 23:54:31 rpc -- common/autotest_common.sh@969 -- # kill 69450 00:05:41.202 23:54:31 rpc -- common/autotest_common.sh@974 -- # wait 69450 00:05:41.202 00:05:41.202 real 0m2.208s 00:05:41.202 user 0m2.652s 00:05:41.202 sys 0m0.570s 00:05:41.202 23:54:31 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.202 23:54:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.202 ************************************ 00:05:41.202 END TEST rpc 00:05:41.202 ************************************ 00:05:41.463 23:54:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:41.463 23:54:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.463 23:54:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.463 23:54:31 -- common/autotest_common.sh@10 -- # set +x 00:05:41.463 ************************************ 00:05:41.463 START TEST skip_rpc 00:05:41.463 ************************************ 00:05:41.463 23:54:31 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:41.463 * Looking for test storage... 
00:05:41.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.463 23:54:31 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.463 23:54:31 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:41.463 23:54:31 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.463 23:54:31 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.463 23:54:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:41.463 23:54:31 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.463 23:54:31 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.464 --rc genhtml_branch_coverage=1 00:05:41.464 --rc genhtml_function_coverage=1 00:05:41.464 --rc genhtml_legend=1 00:05:41.464 --rc geninfo_all_blocks=1 00:05:41.464 --rc geninfo_unexecuted_blocks=1 00:05:41.464 00:05:41.464 ' 00:05:41.464 23:54:31 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.464 --rc genhtml_branch_coverage=1 00:05:41.464 --rc genhtml_function_coverage=1 00:05:41.464 --rc genhtml_legend=1 00:05:41.464 --rc geninfo_all_blocks=1 00:05:41.464 --rc geninfo_unexecuted_blocks=1 00:05:41.464 00:05:41.464 ' 00:05:41.464 23:54:31 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:41.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.464 --rc genhtml_branch_coverage=1 00:05:41.464 --rc genhtml_function_coverage=1 00:05:41.464 --rc genhtml_legend=1 00:05:41.464 --rc geninfo_all_blocks=1 00:05:41.464 --rc geninfo_unexecuted_blocks=1 00:05:41.464 00:05:41.464 ' 00:05:41.464 23:54:31 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.464 --rc genhtml_branch_coverage=1 00:05:41.464 --rc genhtml_function_coverage=1 00:05:41.464 --rc genhtml_legend=1 00:05:41.464 --rc geninfo_all_blocks=1 00:05:41.464 --rc geninfo_unexecuted_blocks=1 00:05:41.464 00:05:41.464 ' 00:05:41.464 23:54:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:41.464 23:54:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:41.464 23:54:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:41.464 23:54:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.464 23:54:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.464 23:54:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.464 ************************************ 00:05:41.464 START TEST skip_rpc 00:05:41.464 ************************************ 00:05:41.464 23:54:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:41.464 23:54:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69652 00:05:41.464 23:54:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.464 23:54:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:41.464 23:54:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:41.464 [2024-11-20 23:54:31.847841] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:41.464 [2024-11-20 23:54:31.847957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69652 ] 00:05:41.726 [2024-11-20 23:54:31.983973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.726 [2024-11-20 23:54:32.016547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69652 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69652 ']' 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69652 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.044 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69652 00:05:47.045 killing process with pid 69652 00:05:47.045 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.045 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.045 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69652' 00:05:47.045 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69652 00:05:47.045 23:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69652 00:05:47.045 ************************************ 00:05:47.045 END TEST skip_rpc 00:05:47.045 ************************************ 00:05:47.045 00:05:47.045 real 0m5.271s 00:05:47.045 user 0m4.934s 00:05:47.045 sys 0m0.232s 00:05:47.045 23:54:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.045 23:54:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:47.045 23:54:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:47.045 23:54:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.045 23:54:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.045 23:54:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.045 ************************************ 00:05:47.045 START TEST skip_rpc_with_json 00:05:47.045 ************************************ 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:47.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69739 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69739 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69739 ']' 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.045 23:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.045 [2024-11-20 23:54:37.168009] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:47.045 [2024-11-20 23:54:37.168108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69739 ] 00:05:47.045 [2024-11-20 23:54:37.297025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.045 [2024-11-20 23:54:37.327817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.616 [2024-11-20 23:54:38.012068] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:47.616 request: 00:05:47.616 { 00:05:47.616 "trtype": "tcp", 00:05:47.616 "method": "nvmf_get_transports", 00:05:47.616 "req_id": 1 00:05:47.616 } 00:05:47.616 Got JSON-RPC error response 00:05:47.616 response: 00:05:47.616 { 00:05:47.616 "code": -19, 00:05:47.616 "message": "No such device" 00:05:47.616 } 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.616 [2024-11-20 23:54:38.020157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.616 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.880 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.880 23:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.880 { 00:05:47.880 "subsystems": [ 00:05:47.880 { 00:05:47.880 "subsystem": "fsdev", 00:05:47.880 "config": [ 00:05:47.880 { 00:05:47.880 "method": "fsdev_set_opts", 00:05:47.880 "params": { 00:05:47.880 "fsdev_io_pool_size": 65535, 00:05:47.880 "fsdev_io_cache_size": 256 00:05:47.880 } 00:05:47.880 } 00:05:47.880 ] 00:05:47.880 }, 00:05:47.880 { 00:05:47.880 "subsystem": "keyring", 00:05:47.880 "config": [] 00:05:47.880 }, 00:05:47.880 { 00:05:47.880 "subsystem": "iobuf", 00:05:47.880 "config": [ 00:05:47.880 { 00:05:47.880 "method": "iobuf_set_options", 00:05:47.880 "params": { 00:05:47.880 "small_pool_count": 8192, 00:05:47.880 "large_pool_count": 1024, 00:05:47.880 "small_bufsize": 8192, 00:05:47.880 "large_bufsize": 135168 00:05:47.880 } 00:05:47.880 } 00:05:47.880 ] 00:05:47.880 }, 00:05:47.880 { 00:05:47.880 "subsystem": "sock", 00:05:47.880 "config": [ 00:05:47.880 { 00:05:47.880 "method": 
"sock_set_default_impl", 00:05:47.880 "params": { 00:05:47.880 "impl_name": "posix" 00:05:47.880 } 00:05:47.880 }, 00:05:47.880 { 00:05:47.880 "method": "sock_impl_set_options", 00:05:47.880 "params": { 00:05:47.880 "impl_name": "ssl", 00:05:47.880 "recv_buf_size": 4096, 00:05:47.880 "send_buf_size": 4096, 00:05:47.880 "enable_recv_pipe": true, 00:05:47.880 "enable_quickack": false, 00:05:47.880 "enable_placement_id": 0, 00:05:47.880 "enable_zerocopy_send_server": true, 00:05:47.880 "enable_zerocopy_send_client": false, 00:05:47.880 "zerocopy_threshold": 0, 00:05:47.880 "tls_version": 0, 00:05:47.880 "enable_ktls": false 00:05:47.880 } 00:05:47.880 }, 00:05:47.880 { 00:05:47.880 "method": "sock_impl_set_options", 00:05:47.880 "params": { 00:05:47.880 "impl_name": "posix", 00:05:47.880 "recv_buf_size": 2097152, 00:05:47.880 "send_buf_size": 2097152, 00:05:47.880 "enable_recv_pipe": true, 00:05:47.880 "enable_quickack": false, 00:05:47.880 "enable_placement_id": 0, 00:05:47.880 "enable_zerocopy_send_server": true, 00:05:47.880 "enable_zerocopy_send_client": false, 00:05:47.880 "zerocopy_threshold": 0, 00:05:47.880 "tls_version": 0, 00:05:47.880 "enable_ktls": false 00:05:47.880 } 00:05:47.880 } 00:05:47.880 ] 00:05:47.880 }, 00:05:47.880 { 00:05:47.880 "subsystem": "vmd", 00:05:47.880 "config": [] 00:05:47.880 }, 00:05:47.880 { 00:05:47.880 "subsystem": "accel", 00:05:47.880 "config": [ 00:05:47.880 { 00:05:47.880 "method": "accel_set_options", 00:05:47.880 "params": { 00:05:47.880 "small_cache_size": 128, 00:05:47.880 "large_cache_size": 16, 00:05:47.880 "task_count": 2048, 00:05:47.880 "sequence_count": 2048, 00:05:47.880 "buf_count": 2048 00:05:47.880 } 00:05:47.880 } 00:05:47.880 ] 00:05:47.880 }, 00:05:47.880 { 00:05:47.880 "subsystem": "bdev", 00:05:47.880 "config": [ 00:05:47.880 { 00:05:47.880 "method": "bdev_set_options", 00:05:47.880 "params": { 00:05:47.880 "bdev_io_pool_size": 65535, 00:05:47.880 "bdev_io_cache_size": 256, 00:05:47.880 "bdev_auto_examine": true, 00:05:47.880 "iobuf_small_cache_size": 128, 00:05:47.880 "iobuf_large_cache_size": 16 00:05:47.880 } 00:05:47.880 }, 00:05:47.880 { 00:05:47.880 "method": "bdev_raid_set_options", 00:05:47.880 "params": { 00:05:47.880 "process_window_size_kb": 1024, 00:05:47.880 "process_max_bandwidth_mb_sec": 0 00:05:47.880 } 00:05:47.880 }, 00:05:47.880 { 00:05:47.880 "method": "bdev_iscsi_set_options", 00:05:47.880 "params": { 00:05:47.880 "timeout_sec": 30 00:05:47.880 } 00:05:47.880 }, 00:05:47.880 { 00:05:47.880 "method": "bdev_nvme_set_options", 00:05:47.880 "params": { 00:05:47.880 "action_on_timeout": "none", 00:05:47.880 "timeout_us": 0, 00:05:47.880 "timeout_admin_us": 0, 00:05:47.880 "keep_alive_timeout_ms": 10000, 00:05:47.880 "arbitration_burst": 0, 00:05:47.880 "low_priority_weight": 0, 00:05:47.880 "medium_priority_weight": 0, 00:05:47.880 "high_priority_weight": 0, 00:05:47.880 "nvme_adminq_poll_period_us": 10000, 00:05:47.880 "nvme_ioq_poll_period_us": 0, 00:05:47.880 "io_queue_requests": 0, 00:05:47.880 "delay_cmd_submit": true, 00:05:47.880 "transport_retry_count": 4, 00:05:47.880 "bdev_retry_count": 3, 00:05:47.880 "transport_ack_timeout": 0, 00:05:47.880 "ctrlr_loss_timeout_sec": 0, 00:05:47.880 "reconnect_delay_sec": 0, 00:05:47.880 "fast_io_fail_timeout_sec": 0, 00:05:47.880 "disable_auto_failback": false, 00:05:47.880 "generate_uuids": false, 00:05:47.880 "transport_tos": 0, 00:05:47.880 "nvme_error_stat": false, 00:05:47.880 "rdma_srq_size": 0, 00:05:47.880 "io_path_stat": false, 00:05:47.880 
"allow_accel_sequence": false, 00:05:47.880 "rdma_max_cq_size": 0, 00:05:47.881 "rdma_cm_event_timeout_ms": 0, 00:05:47.881 "dhchap_digests": [ 00:05:47.881 "sha256", 00:05:47.881 "sha384", 00:05:47.881 "sha512" 00:05:47.881 ], 00:05:47.881 "dhchap_dhgroups": [ 00:05:47.881 "null", 00:05:47.881 "ffdhe2048", 00:05:47.881 "ffdhe3072", 00:05:47.881 "ffdhe4096", 00:05:47.881 "ffdhe6144", 00:05:47.881 "ffdhe8192" 00:05:47.881 ] 00:05:47.881 } 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "method": "bdev_nvme_set_hotplug", 00:05:47.881 "params": { 00:05:47.881 "period_us": 100000, 00:05:47.881 "enable": false 00:05:47.881 } 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "method": "bdev_wait_for_examine" 00:05:47.881 } 00:05:47.881 ] 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "subsystem": "scsi", 00:05:47.881 "config": null 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "subsystem": "scheduler", 00:05:47.881 "config": [ 00:05:47.881 { 00:05:47.881 "method": "framework_set_scheduler", 00:05:47.881 "params": { 00:05:47.881 "name": "static" 00:05:47.881 } 00:05:47.881 } 00:05:47.881 ] 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "subsystem": "vhost_scsi", 00:05:47.881 "config": [] 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "subsystem": "vhost_blk", 00:05:47.881 "config": [] 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "subsystem": "ublk", 00:05:47.881 "config": [] 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "subsystem": "nbd", 00:05:47.881 "config": [] 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "subsystem": "nvmf", 00:05:47.881 "config": [ 00:05:47.881 { 00:05:47.881 "method": "nvmf_set_config", 00:05:47.881 "params": { 00:05:47.881 "discovery_filter": "match_any", 00:05:47.881 "admin_cmd_passthru": { 00:05:47.881 "identify_ctrlr": false 00:05:47.881 }, 00:05:47.881 "dhchap_digests": [ 00:05:47.881 "sha256", 00:05:47.881 "sha384", 00:05:47.881 "sha512" 00:05:47.881 ], 00:05:47.881 "dhchap_dhgroups": [ 00:05:47.881 "null", 00:05:47.881 "ffdhe2048", 00:05:47.881 "ffdhe3072", 00:05:47.881 "ffdhe4096", 00:05:47.881 "ffdhe6144", 00:05:47.881 "ffdhe8192" 00:05:47.881 ] 00:05:47.881 } 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "method": "nvmf_set_max_subsystems", 00:05:47.881 "params": { 00:05:47.881 "max_subsystems": 1024 00:05:47.881 } 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "method": "nvmf_set_crdt", 00:05:47.881 "params": { 00:05:47.881 "crdt1": 0, 00:05:47.881 "crdt2": 0, 00:05:47.881 "crdt3": 0 00:05:47.881 } 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "method": "nvmf_create_transport", 00:05:47.881 "params": { 00:05:47.881 "trtype": "TCP", 00:05:47.881 "max_queue_depth": 128, 00:05:47.881 "max_io_qpairs_per_ctrlr": 127, 00:05:47.881 "in_capsule_data_size": 4096, 00:05:47.881 "max_io_size": 131072, 00:05:47.881 "io_unit_size": 131072, 00:05:47.881 "max_aq_depth": 128, 00:05:47.881 "num_shared_buffers": 511, 00:05:47.881 "buf_cache_size": 4294967295, 00:05:47.881 "dif_insert_or_strip": false, 00:05:47.881 "zcopy": false, 00:05:47.881 "c2h_success": true, 00:05:47.881 "sock_priority": 0, 00:05:47.881 "abort_timeout_sec": 1, 00:05:47.881 "ack_timeout": 0, 00:05:47.881 "data_wr_pool_size": 0 00:05:47.881 } 00:05:47.881 } 00:05:47.881 ] 00:05:47.881 }, 00:05:47.881 { 00:05:47.881 "subsystem": "iscsi", 00:05:47.881 "config": [ 00:05:47.881 { 00:05:47.881 "method": "iscsi_set_options", 00:05:47.881 "params": { 00:05:47.881 "node_base": "iqn.2016-06.io.spdk", 00:05:47.881 "max_sessions": 128, 00:05:47.881 "max_connections_per_session": 2, 00:05:47.881 "max_queue_depth": 64, 00:05:47.881 "default_time2wait": 2, 
00:05:47.881 "default_time2retain": 20, 00:05:47.881 "first_burst_length": 8192, 00:05:47.881 "immediate_data": true, 00:05:47.881 "allow_duplicated_isid": false, 00:05:47.881 "error_recovery_level": 0, 00:05:47.881 "nop_timeout": 60, 00:05:47.881 "nop_in_interval": 30, 00:05:47.881 "disable_chap": false, 00:05:47.881 "require_chap": false, 00:05:47.881 "mutual_chap": false, 00:05:47.881 "chap_group": 0, 00:05:47.881 "max_large_datain_per_connection": 64, 00:05:47.881 "max_r2t_per_connection": 4, 00:05:47.881 "pdu_pool_size": 36864, 00:05:47.881 "immediate_data_pool_size": 16384, 00:05:47.881 "data_out_pool_size": 2048 00:05:47.881 } 00:05:47.881 } 00:05:47.881 ] 00:05:47.881 } 00:05:47.881 ] 00:05:47.881 } 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69739 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69739 ']' 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69739 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69739 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.881 killing process with pid 69739 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69739' 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69739 00:05:47.881 23:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69739 00:05:48.143 23:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69762 00:05:48.143 23:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:48.143 23:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69762 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69762 ']' 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69762 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69762 00:05:53.436 killing process with pid 69762 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69762' 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69762 
00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69762 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:53.436 ************************************ 00:05:53.436 END TEST skip_rpc_with_json 00:05:53.436 ************************************ 00:05:53.436 00:05:53.436 real 0m6.602s 00:05:53.436 user 0m6.331s 00:05:53.436 sys 0m0.499s 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.436 23:54:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:53.436 23:54:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.436 23:54:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.436 23:54:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.436 ************************************ 00:05:53.436 START TEST skip_rpc_with_delay 00:05:53.436 ************************************ 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:53.436 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:53.436 [2024-11-20 23:54:43.847400] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:53.436 [2024-11-20 23:54:43.847509] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:53.698 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:53.698 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.698 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.698 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.698 00:05:53.698 real 0m0.117s 00:05:53.698 user 0m0.063s 00:05:53.698 sys 0m0.052s 00:05:53.698 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.698 ************************************ 00:05:53.698 END TEST skip_rpc_with_delay 00:05:53.698 ************************************ 00:05:53.698 23:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:53.698 23:54:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:53.698 23:54:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:53.698 23:54:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:53.698 23:54:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.698 23:54:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.698 23:54:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.698 ************************************ 00:05:53.698 START TEST exit_on_failed_rpc_init 00:05:53.698 ************************************ 00:05:53.698 23:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:53.698 23:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69874 00:05:53.698 23:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69874 00:05:53.698 23:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69874 ']' 00:05:53.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.698 23:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.698 23:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.698 23:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.698 23:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.698 23:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.698 23:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.698 [2024-11-20 23:54:44.021899] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:53.698 [2024-11-20 23:54:44.022039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69874 ] 00:05:53.960 [2024-11-20 23:54:44.158619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.960 [2024-11-20 23:54:44.208520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:54.533 23:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:54.533 [2024-11-20 23:54:44.894559] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:54.533 [2024-11-20 23:54:44.894675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69886 ] 00:05:54.793 [2024-11-20 23:54:45.027946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.793 [2024-11-20 23:54:45.066336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.793 [2024-11-20 23:54:45.066431] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:54.793 [2024-11-20 23:54:45.066447] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:54.793 [2024-11-20 23:54:45.066457] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.793 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:54.793 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69874 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69874 ']' 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69874 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69874 00:05:54.794 killing process with pid 69874 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69874' 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69874 00:05:54.794 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69874 00:05:55.055 ************************************ 00:05:55.055 END TEST exit_on_failed_rpc_init 00:05:55.055 ************************************ 00:05:55.055 00:05:55.055 real 0m1.462s 00:05:55.055 user 0m1.484s 00:05:55.055 sys 0m0.461s 00:05:55.055 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.055 23:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.055 23:54:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.055 ************************************ 00:05:55.055 END TEST skip_rpc 00:05:55.055 ************************************ 00:05:55.055 00:05:55.055 real 0m13.813s 00:05:55.055 user 0m12.954s 00:05:55.055 sys 0m1.416s 00:05:55.055 23:54:45 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.055 23:54:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.055 23:54:45 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:55.055 23:54:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.055 23:54:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.055 23:54:45 -- common/autotest_common.sh@10 -- # set +x 00:05:55.320 
************************************ 00:05:55.320 START TEST rpc_client 00:05:55.320 ************************************ 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:55.320 * Looking for test storage... 00:05:55.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.320 23:54:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.320 --rc genhtml_branch_coverage=1 00:05:55.320 --rc genhtml_function_coverage=1 00:05:55.320 --rc genhtml_legend=1 00:05:55.320 --rc geninfo_all_blocks=1 00:05:55.320 --rc geninfo_unexecuted_blocks=1 00:05:55.320 00:05:55.320 ' 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.320 --rc genhtml_branch_coverage=1 00:05:55.320 --rc genhtml_function_coverage=1 00:05:55.320 --rc genhtml_legend=1 00:05:55.320 --rc geninfo_all_blocks=1 00:05:55.320 --rc geninfo_unexecuted_blocks=1 00:05:55.320 00:05:55.320 ' 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.320 --rc genhtml_branch_coverage=1 00:05:55.320 --rc genhtml_function_coverage=1 00:05:55.320 --rc genhtml_legend=1 00:05:55.320 --rc geninfo_all_blocks=1 00:05:55.320 --rc geninfo_unexecuted_blocks=1 00:05:55.320 00:05:55.320 ' 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.320 --rc genhtml_branch_coverage=1 00:05:55.320 --rc genhtml_function_coverage=1 00:05:55.320 --rc genhtml_legend=1 00:05:55.320 --rc geninfo_all_blocks=1 00:05:55.320 --rc geninfo_unexecuted_blocks=1 00:05:55.320 00:05:55.320 ' 00:05:55.320 23:54:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:55.320 OK 00:05:55.320 23:54:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:55.320 ************************************ 00:05:55.320 END TEST rpc_client 00:05:55.320 ************************************ 00:05:55.320 00:05:55.320 real 0m0.184s 00:05:55.320 user 0m0.111s 00:05:55.320 sys 0m0.080s 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.320 23:54:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:55.320 23:54:45 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:55.320 23:54:45 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.320 23:54:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.320 23:54:45 -- common/autotest_common.sh@10 -- # set +x 00:05:55.320 ************************************ 00:05:55.320 START TEST json_config 00:05:55.320 ************************************ 00:05:55.320 23:54:45 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:55.593 23:54:45 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.593 23:54:45 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.593 23:54:45 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.593 23:54:45 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.593 23:54:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.593 23:54:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.593 23:54:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.593 23:54:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.593 23:54:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.593 23:54:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.593 23:54:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.593 23:54:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.593 23:54:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.593 23:54:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.593 23:54:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.593 23:54:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:55.593 23:54:45 json_config -- scripts/common.sh@345 -- # : 1 00:05:55.593 23:54:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.593 23:54:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.593 23:54:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:55.593 23:54:45 json_config -- scripts/common.sh@353 -- # local d=1 00:05:55.593 23:54:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.593 23:54:45 json_config -- scripts/common.sh@355 -- # echo 1 00:05:55.593 23:54:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.593 23:54:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:55.593 23:54:45 json_config -- scripts/common.sh@353 -- # local d=2 00:05:55.593 23:54:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.593 23:54:45 json_config -- scripts/common.sh@355 -- # echo 2 00:05:55.593 23:54:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.593 23:54:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.593 23:54:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.593 23:54:45 json_config -- scripts/common.sh@368 -- # return 0 00:05:55.593 23:54:45 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.593 23:54:45 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.593 --rc genhtml_branch_coverage=1 00:05:55.593 --rc genhtml_function_coverage=1 00:05:55.593 --rc genhtml_legend=1 00:05:55.593 --rc geninfo_all_blocks=1 00:05:55.593 --rc geninfo_unexecuted_blocks=1 00:05:55.593 00:05:55.593 ' 00:05:55.593 23:54:45 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.593 --rc genhtml_branch_coverage=1 00:05:55.593 --rc genhtml_function_coverage=1 00:05:55.593 --rc genhtml_legend=1 00:05:55.593 --rc geninfo_all_blocks=1 00:05:55.593 --rc geninfo_unexecuted_blocks=1 00:05:55.593 00:05:55.593 ' 00:05:55.593 23:54:45 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.593 --rc genhtml_branch_coverage=1 00:05:55.593 --rc genhtml_function_coverage=1 00:05:55.593 --rc genhtml_legend=1 00:05:55.593 --rc geninfo_all_blocks=1 00:05:55.593 --rc geninfo_unexecuted_blocks=1 00:05:55.593 00:05:55.593 ' 00:05:55.593 23:54:45 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.593 --rc genhtml_branch_coverage=1 00:05:55.593 --rc genhtml_function_coverage=1 00:05:55.593 --rc genhtml_legend=1 00:05:55.593 --rc geninfo_all_blocks=1 00:05:55.593 --rc geninfo_unexecuted_blocks=1 00:05:55.593 00:05:55.593 ' 00:05:55.593 23:54:45 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:55.593 23:54:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:55.593 23:54:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.593 23:54:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.593 23:54:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.594 23:54:45 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15ef5cdc-f81d-4909-b4f0-e2a4a086d794 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=15ef5cdc-f81d-4909-b4f0-e2a4a086d794 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:55.594 23:54:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.594 23:54:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.594 23:54:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.594 23:54:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.594 23:54:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.594 23:54:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.594 23:54:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.594 23:54:45 json_config -- paths/export.sh@5 -- # export PATH 00:05:55.594 23:54:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@51 -- # : 0 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.594 23:54:45 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.594 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.594 23:54:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.594 23:54:45 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:55.594 23:54:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:55.594 23:54:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:55.594 23:54:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:55.594 23:54:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:55.594 23:54:45 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:55.594 WARNING: No tests are enabled so not running JSON configuration tests 00:05:55.594 23:54:45 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:55.594 00:05:55.594 real 0m0.139s 00:05:55.594 user 0m0.084s 00:05:55.594 sys 0m0.056s 00:05:55.594 23:54:45 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.594 23:54:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.594 ************************************ 00:05:55.594 END TEST json_config 00:05:55.594 ************************************ 00:05:55.594 23:54:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:55.594 23:54:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.594 23:54:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.594 23:54:45 -- common/autotest_common.sh@10 -- # set +x 00:05:55.594 ************************************ 00:05:55.594 START TEST json_config_extra_key 00:05:55.594 ************************************ 00:05:55.594 23:54:45 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:55.594 23:54:45 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.594 23:54:45 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.594 23:54:45 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.594 23:54:45 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.594 23:54:45 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.594 23:54:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:55.594 23:54:45 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.594 23:54:45 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.594 --rc genhtml_branch_coverage=1 00:05:55.594 --rc genhtml_function_coverage=1 00:05:55.594 --rc genhtml_legend=1 00:05:55.594 --rc geninfo_all_blocks=1 00:05:55.594 --rc geninfo_unexecuted_blocks=1 00:05:55.594 00:05:55.594 ' 00:05:55.594 23:54:45 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.594 --rc genhtml_branch_coverage=1 00:05:55.594 --rc genhtml_function_coverage=1 00:05:55.594 --rc genhtml_legend=1 00:05:55.595 --rc geninfo_all_blocks=1 00:05:55.595 --rc geninfo_unexecuted_blocks=1 00:05:55.595 00:05:55.595 ' 00:05:55.595 23:54:45 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.595 --rc genhtml_branch_coverage=1 00:05:55.595 --rc genhtml_function_coverage=1 00:05:55.595 --rc genhtml_legend=1 00:05:55.595 --rc geninfo_all_blocks=1 00:05:55.595 --rc geninfo_unexecuted_blocks=1 00:05:55.595 00:05:55.595 ' 00:05:55.595 23:54:45 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.595 --rc genhtml_branch_coverage=1 00:05:55.595 --rc 
genhtml_function_coverage=1 00:05:55.595 --rc genhtml_legend=1 00:05:55.595 --rc geninfo_all_blocks=1 00:05:55.595 --rc geninfo_unexecuted_blocks=1 00:05:55.595 00:05:55.595 ' 00:05:55.595 23:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:55.595 23:54:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:55.595 23:54:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.595 23:54:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.595 23:54:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.595 23:54:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.595 23:54:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.595 23:54:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.595 23:54:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.595 23:54:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.595 23:54:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.595 23:54:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:15ef5cdc-f81d-4909-b4f0-e2a4a086d794 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=15ef5cdc-f81d-4909-b4f0-e2a4a086d794 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:55.595 23:54:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.595 23:54:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.595 23:54:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.595 23:54:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.595 23:54:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.595 23:54:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.595 23:54:46 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.595 23:54:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:55.595 23:54:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.595 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.595 23:54:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:55.595 INFO: launching applications... 
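The json_config_extra_key trace above seeds its per-app bookkeeping with bash associative arrays, one slot per app name, holding the PID, the RPC socket, the EAL parameters, and the JSON config to load. A minimal sketch of that pattern, with the values taken straight from the trace:

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    # Every helper below indexes the arrays with the same app name
    app=target
    echo "socket=${app_socket[$app]} params=${app_params[$app]}"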
00:05:55.595 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:55.595 23:54:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:55.595 23:54:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:55.595 23:54:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:55.856 23:54:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:55.856 23:54:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:55.856 23:54:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.856 23:54:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:55.856 23:54:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70069 00:05:55.856 23:54:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:55.856 Waiting for target to run... 00:05:55.856 23:54:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70069 /var/tmp/spdk_tgt.sock 00:05:55.856 23:54:46 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70069 ']' 00:05:55.856 23:54:46 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.856 23:54:46 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.856 23:54:46 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:55.856 23:54:46 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.856 23:54:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:55.856 23:54:46 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:55.856 [2024-11-20 23:54:46.087279] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:55.856 [2024-11-20 23:54:46.087537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70069 ] 00:05:56.117 [2024-11-20 23:54:46.373940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.117 [2024-11-20 23:54:46.391705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.684 23:54:46 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.684 23:54:46 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:56.684 23:54:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:56.684 00:05:56.684 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:56.684 INFO: shutting down applications... 
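Starting the target and waiting for it is the same two-step shape each time: launch spdk_tgt in the background with the app's parameters, RPC socket, and JSON config, record its PID, then poll until the socket answers an RPC. A hedged sketch of that loop (the real waitforlisten in autotest_common.sh carries more retry and error handling; probing with spdk_get_version is just one simple way to test the socket):

    app=target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!

    # Poll until the RPC socket accepts a request
    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "${app_socket[$app]}" spdk_get_version &> /dev/null; then
            break
        fi
        sleep 0.1
    done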
00:05:56.684 23:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:56.684 23:54:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:56.684 23:54:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:56.684 23:54:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70069 ]] 00:05:56.684 23:54:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70069 00:05:56.684 23:54:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:56.684 23:54:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.684 23:54:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70069 00:05:56.684 23:54:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:57.250 23:54:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:57.250 23:54:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.250 23:54:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70069 00:05:57.250 23:54:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:57.250 23:54:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:57.250 23:54:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:57.250 SPDK target shutdown done 00:05:57.250 Success 00:05:57.250 23:54:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:57.250 23:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:57.250 00:05:57.250 real 0m1.551s 00:05:57.250 user 0m1.261s 00:05:57.250 sys 0m0.330s 00:05:57.250 ************************************ 00:05:57.250 END TEST json_config_extra_key 00:05:57.250 ************************************ 00:05:57.250 23:54:47 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.250 23:54:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.250 23:54:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.250 23:54:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.250 23:54:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.250 23:54:47 -- common/autotest_common.sh@10 -- # set +x 00:05:57.250 ************************************ 00:05:57.250 START TEST alias_rpc 00:05:57.250 ************************************ 00:05:57.250 23:54:47 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.250 * Looking for test storage... 
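The teardown traced above is a SIGINT-then-poll loop: send SIGINT once, then use kill -0, which delivers no signal and only reports whether the PID still exists, every half second for up to 30 rounds before declaring the target down. Reconstructed from the trace:

    pid=${app_pid[$app]}
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 sends nothing; it only checks that the process is still alive
        if ! kill -0 "$pid" 2> /dev/null; then
            app_pid[$app]=
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done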
00:05:57.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:57.250 23:54:47 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:57.250 23:54:47 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:57.250 23:54:47 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:57.250 23:54:47 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.250 23:54:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.251 23:54:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:57.251 23:54:47 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.251 23:54:47 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.251 --rc genhtml_branch_coverage=1 00:05:57.251 --rc genhtml_function_coverage=1 00:05:57.251 --rc genhtml_legend=1 00:05:57.251 --rc geninfo_all_blocks=1 00:05:57.251 --rc geninfo_unexecuted_blocks=1 00:05:57.251 00:05:57.251 ' 00:05:57.251 23:54:47 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.251 --rc genhtml_branch_coverage=1 00:05:57.251 --rc genhtml_function_coverage=1 00:05:57.251 --rc genhtml_legend=1 00:05:57.251 --rc geninfo_all_blocks=1 00:05:57.251 --rc geninfo_unexecuted_blocks=1 00:05:57.251 00:05:57.251 ' 00:05:57.251 23:54:47 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:57.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.251 --rc genhtml_branch_coverage=1 00:05:57.251 --rc genhtml_function_coverage=1 00:05:57.251 --rc genhtml_legend=1 00:05:57.251 --rc geninfo_all_blocks=1 00:05:57.251 --rc geninfo_unexecuted_blocks=1 00:05:57.251 00:05:57.251 ' 00:05:57.251 23:54:47 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.251 --rc genhtml_branch_coverage=1 00:05:57.251 --rc genhtml_function_coverage=1 00:05:57.251 --rc genhtml_legend=1 00:05:57.251 --rc geninfo_all_blocks=1 00:05:57.251 --rc geninfo_unexecuted_blocks=1 00:05:57.251 00:05:57.251 ' 00:05:57.251 23:54:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:57.251 23:54:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70142 00:05:57.251 23:54:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70142 00:05:57.251 23:54:47 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70142 ']' 00:05:57.251 23:54:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.251 23:54:47 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.251 23:54:47 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.251 23:54:47 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.251 23:54:47 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.251 23:54:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.509 [2024-11-20 23:54:47.670736] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
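alias_rpc arms its cleanup before anything can fail: an ERR trap that kills whatever spdk_tgt it spawned is installed ahead of the launch, so an aborted test never leaks the target process. The pattern as the trace records it, with $! capturing the background PID that shows up here as 70142:

    trap 'killprocess $spdk_tgt_pid; exit 1' ERR

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"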
00:05:57.509 [2024-11-20 23:54:47.670981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70142 ] 00:05:57.509 [2024-11-20 23:54:47.797752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.509 [2024-11-20 23:54:47.828599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.444 23:54:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:58.444 23:54:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70142 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70142 ']' 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70142 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70142 00:05:58.444 killing process with pid 70142 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70142' 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@969 -- # kill 70142 00:05:58.444 23:54:48 alias_rpc -- common/autotest_common.sh@974 -- # wait 70142 00:05:58.703 ************************************ 00:05:58.703 END TEST alias_rpc 00:05:58.703 ************************************ 00:05:58.703 00:05:58.703 real 0m1.557s 00:05:58.703 user 0m1.720s 00:05:58.703 sys 0m0.340s 00:05:58.703 23:54:49 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.703 23:54:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.703 23:54:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:58.703 23:54:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:58.703 23:54:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.703 23:54:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.703 23:54:49 -- common/autotest_common.sh@10 -- # set +x 00:05:58.703 ************************************ 00:05:58.703 START TEST spdkcli_tcp 00:05:58.703 ************************************ 00:05:58.703 23:54:49 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:58.703 * Looking for test storage... 
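The killprocess helper traced in the alias_rpc teardown above is deliberately careful: it confirms the PID is non-empty and alive, resolves the process name with ps on Linux (reactor_0 here), refuses to signal a sudo wrapper directly, and waits so the exit status is reaped. A sketch covering just the branches exercised in the trace:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1          # still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }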
00:05:58.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:58.966 23:54:49 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.966 23:54:49 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.966 23:54:49 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.966 23:54:49 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:58.966 23:54:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.967 23:54:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:58.967 23:54:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.967 23:54:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:58.967 23:54:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:58.967 23:54:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.967 23:54:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:58.967 23:54:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.967 23:54:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.967 23:54:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.967 23:54:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:58.967 23:54:49 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.967 23:54:49 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.967 --rc genhtml_branch_coverage=1 00:05:58.967 --rc genhtml_function_coverage=1 00:05:58.967 --rc genhtml_legend=1 00:05:58.967 --rc geninfo_all_blocks=1 00:05:58.967 --rc geninfo_unexecuted_blocks=1 00:05:58.967 00:05:58.967 ' 00:05:58.967 23:54:49 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.967 --rc genhtml_branch_coverage=1 00:05:58.967 --rc genhtml_function_coverage=1 00:05:58.967 --rc genhtml_legend=1 00:05:58.967 --rc geninfo_all_blocks=1 00:05:58.967 --rc geninfo_unexecuted_blocks=1 00:05:58.967 
00:05:58.967 ' 00:05:58.967 23:54:49 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.967 --rc genhtml_branch_coverage=1 00:05:58.967 --rc genhtml_function_coverage=1 00:05:58.967 --rc genhtml_legend=1 00:05:58.967 --rc geninfo_all_blocks=1 00:05:58.967 --rc geninfo_unexecuted_blocks=1 00:05:58.967 00:05:58.967 ' 00:05:58.967 23:54:49 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.967 --rc genhtml_branch_coverage=1 00:05:58.967 --rc genhtml_function_coverage=1 00:05:58.967 --rc genhtml_legend=1 00:05:58.967 --rc geninfo_all_blocks=1 00:05:58.967 --rc geninfo_unexecuted_blocks=1 00:05:58.967 00:05:58.967 ' 00:05:58.967 23:54:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:58.967 23:54:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:58.968 23:54:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:58.968 23:54:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:58.968 23:54:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:58.968 23:54:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:58.968 23:54:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:58.968 23:54:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:58.968 23:54:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.968 23:54:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70222 00:05:58.968 23:54:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70222 00:05:58.968 23:54:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:58.968 23:54:49 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70222 ']' 00:05:58.968 23:54:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.968 23:54:49 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.968 23:54:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.968 23:54:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.968 23:54:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.968 [2024-11-20 23:54:49.273144] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
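With the target up on its UNIX socket, spdkcli_tcp exercises the RPC client's TCP path by inserting socat as a bridge: it listens on the 127.0.0.1:9998 pair defined above and forwards each connection to /var/tmp/spdk.sock, while rpc.py is pointed at the TCP side with 100 retries and a 2-second timeout, exactly as the next trace lines show. The bridge in isolation:

    # Forward TCP port 9998 to spdk_tgt's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Drive an RPC over TCP instead of the UNIX socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods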
00:05:58.968 [2024-11-20 23:54:49.273423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70222 ] 00:05:59.227 [2024-11-20 23:54:49.410002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.227 [2024-11-20 23:54:49.442334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.227 [2024-11-20 23:54:49.442378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.794 23:54:50 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.794 23:54:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:59.794 23:54:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70239 00:05:59.794 23:54:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:59.794 23:54:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:00.053 [ 00:06:00.053 "bdev_malloc_delete", 00:06:00.053 "bdev_malloc_create", 00:06:00.053 "bdev_null_resize", 00:06:00.053 "bdev_null_delete", 00:06:00.053 "bdev_null_create", 00:06:00.053 "bdev_nvme_cuse_unregister", 00:06:00.053 "bdev_nvme_cuse_register", 00:06:00.053 "bdev_opal_new_user", 00:06:00.053 "bdev_opal_set_lock_state", 00:06:00.053 "bdev_opal_delete", 00:06:00.053 "bdev_opal_get_info", 00:06:00.053 "bdev_opal_create", 00:06:00.053 "bdev_nvme_opal_revert", 00:06:00.053 "bdev_nvme_opal_init", 00:06:00.053 "bdev_nvme_send_cmd", 00:06:00.053 "bdev_nvme_set_keys", 00:06:00.053 "bdev_nvme_get_path_iostat", 00:06:00.053 "bdev_nvme_get_mdns_discovery_info", 00:06:00.053 "bdev_nvme_stop_mdns_discovery", 00:06:00.053 "bdev_nvme_start_mdns_discovery", 00:06:00.053 "bdev_nvme_set_multipath_policy", 00:06:00.053 "bdev_nvme_set_preferred_path", 00:06:00.053 "bdev_nvme_get_io_paths", 00:06:00.053 "bdev_nvme_remove_error_injection", 00:06:00.053 "bdev_nvme_add_error_injection", 00:06:00.053 "bdev_nvme_get_discovery_info", 00:06:00.053 "bdev_nvme_stop_discovery", 00:06:00.053 "bdev_nvme_start_discovery", 00:06:00.053 "bdev_nvme_get_controller_health_info", 00:06:00.053 "bdev_nvme_disable_controller", 00:06:00.053 "bdev_nvme_enable_controller", 00:06:00.053 "bdev_nvme_reset_controller", 00:06:00.053 "bdev_nvme_get_transport_statistics", 00:06:00.053 "bdev_nvme_apply_firmware", 00:06:00.053 "bdev_nvme_detach_controller", 00:06:00.053 "bdev_nvme_get_controllers", 00:06:00.053 "bdev_nvme_attach_controller", 00:06:00.053 "bdev_nvme_set_hotplug", 00:06:00.053 "bdev_nvme_set_options", 00:06:00.053 "bdev_passthru_delete", 00:06:00.053 "bdev_passthru_create", 00:06:00.053 "bdev_lvol_set_parent_bdev", 00:06:00.053 "bdev_lvol_set_parent", 00:06:00.053 "bdev_lvol_check_shallow_copy", 00:06:00.053 "bdev_lvol_start_shallow_copy", 00:06:00.053 "bdev_lvol_grow_lvstore", 00:06:00.053 "bdev_lvol_get_lvols", 00:06:00.053 "bdev_lvol_get_lvstores", 00:06:00.053 "bdev_lvol_delete", 00:06:00.053 "bdev_lvol_set_read_only", 00:06:00.053 "bdev_lvol_resize", 00:06:00.053 "bdev_lvol_decouple_parent", 00:06:00.053 "bdev_lvol_inflate", 00:06:00.053 "bdev_lvol_rename", 00:06:00.053 "bdev_lvol_clone_bdev", 00:06:00.053 "bdev_lvol_clone", 00:06:00.053 "bdev_lvol_snapshot", 00:06:00.053 "bdev_lvol_create", 00:06:00.053 "bdev_lvol_delete_lvstore", 00:06:00.053 "bdev_lvol_rename_lvstore", 00:06:00.053 
"bdev_lvol_create_lvstore", 00:06:00.053 "bdev_raid_set_options", 00:06:00.053 "bdev_raid_remove_base_bdev", 00:06:00.053 "bdev_raid_add_base_bdev", 00:06:00.053 "bdev_raid_delete", 00:06:00.053 "bdev_raid_create", 00:06:00.053 "bdev_raid_get_bdevs", 00:06:00.053 "bdev_error_inject_error", 00:06:00.053 "bdev_error_delete", 00:06:00.053 "bdev_error_create", 00:06:00.053 "bdev_split_delete", 00:06:00.053 "bdev_split_create", 00:06:00.053 "bdev_delay_delete", 00:06:00.053 "bdev_delay_create", 00:06:00.053 "bdev_delay_update_latency", 00:06:00.053 "bdev_zone_block_delete", 00:06:00.053 "bdev_zone_block_create", 00:06:00.053 "blobfs_create", 00:06:00.053 "blobfs_detect", 00:06:00.053 "blobfs_set_cache_size", 00:06:00.053 "bdev_xnvme_delete", 00:06:00.053 "bdev_xnvme_create", 00:06:00.053 "bdev_aio_delete", 00:06:00.053 "bdev_aio_rescan", 00:06:00.053 "bdev_aio_create", 00:06:00.053 "bdev_ftl_set_property", 00:06:00.053 "bdev_ftl_get_properties", 00:06:00.053 "bdev_ftl_get_stats", 00:06:00.053 "bdev_ftl_unmap", 00:06:00.053 "bdev_ftl_unload", 00:06:00.053 "bdev_ftl_delete", 00:06:00.053 "bdev_ftl_load", 00:06:00.053 "bdev_ftl_create", 00:06:00.053 "bdev_virtio_attach_controller", 00:06:00.053 "bdev_virtio_scsi_get_devices", 00:06:00.053 "bdev_virtio_detach_controller", 00:06:00.053 "bdev_virtio_blk_set_hotplug", 00:06:00.053 "bdev_iscsi_delete", 00:06:00.053 "bdev_iscsi_create", 00:06:00.053 "bdev_iscsi_set_options", 00:06:00.053 "accel_error_inject_error", 00:06:00.053 "ioat_scan_accel_module", 00:06:00.053 "dsa_scan_accel_module", 00:06:00.053 "iaa_scan_accel_module", 00:06:00.053 "keyring_file_remove_key", 00:06:00.053 "keyring_file_add_key", 00:06:00.053 "keyring_linux_set_options", 00:06:00.053 "fsdev_aio_delete", 00:06:00.053 "fsdev_aio_create", 00:06:00.053 "iscsi_get_histogram", 00:06:00.053 "iscsi_enable_histogram", 00:06:00.053 "iscsi_set_options", 00:06:00.053 "iscsi_get_auth_groups", 00:06:00.053 "iscsi_auth_group_remove_secret", 00:06:00.053 "iscsi_auth_group_add_secret", 00:06:00.053 "iscsi_delete_auth_group", 00:06:00.053 "iscsi_create_auth_group", 00:06:00.053 "iscsi_set_discovery_auth", 00:06:00.053 "iscsi_get_options", 00:06:00.053 "iscsi_target_node_request_logout", 00:06:00.053 "iscsi_target_node_set_redirect", 00:06:00.053 "iscsi_target_node_set_auth", 00:06:00.053 "iscsi_target_node_add_lun", 00:06:00.053 "iscsi_get_stats", 00:06:00.053 "iscsi_get_connections", 00:06:00.053 "iscsi_portal_group_set_auth", 00:06:00.053 "iscsi_start_portal_group", 00:06:00.053 "iscsi_delete_portal_group", 00:06:00.053 "iscsi_create_portal_group", 00:06:00.053 "iscsi_get_portal_groups", 00:06:00.053 "iscsi_delete_target_node", 00:06:00.053 "iscsi_target_node_remove_pg_ig_maps", 00:06:00.053 "iscsi_target_node_add_pg_ig_maps", 00:06:00.053 "iscsi_create_target_node", 00:06:00.053 "iscsi_get_target_nodes", 00:06:00.053 "iscsi_delete_initiator_group", 00:06:00.053 "iscsi_initiator_group_remove_initiators", 00:06:00.053 "iscsi_initiator_group_add_initiators", 00:06:00.053 "iscsi_create_initiator_group", 00:06:00.053 "iscsi_get_initiator_groups", 00:06:00.053 "nvmf_set_crdt", 00:06:00.053 "nvmf_set_config", 00:06:00.053 "nvmf_set_max_subsystems", 00:06:00.053 "nvmf_stop_mdns_prr", 00:06:00.053 "nvmf_publish_mdns_prr", 00:06:00.053 "nvmf_subsystem_get_listeners", 00:06:00.053 "nvmf_subsystem_get_qpairs", 00:06:00.053 "nvmf_subsystem_get_controllers", 00:06:00.053 "nvmf_get_stats", 00:06:00.053 "nvmf_get_transports", 00:06:00.053 "nvmf_create_transport", 00:06:00.053 "nvmf_get_targets", 00:06:00.053 
"nvmf_delete_target", 00:06:00.053 "nvmf_create_target", 00:06:00.053 "nvmf_subsystem_allow_any_host", 00:06:00.053 "nvmf_subsystem_set_keys", 00:06:00.053 "nvmf_subsystem_remove_host", 00:06:00.053 "nvmf_subsystem_add_host", 00:06:00.053 "nvmf_ns_remove_host", 00:06:00.053 "nvmf_ns_add_host", 00:06:00.053 "nvmf_subsystem_remove_ns", 00:06:00.053 "nvmf_subsystem_set_ns_ana_group", 00:06:00.053 "nvmf_subsystem_add_ns", 00:06:00.053 "nvmf_subsystem_listener_set_ana_state", 00:06:00.053 "nvmf_discovery_get_referrals", 00:06:00.053 "nvmf_discovery_remove_referral", 00:06:00.053 "nvmf_discovery_add_referral", 00:06:00.053 "nvmf_subsystem_remove_listener", 00:06:00.053 "nvmf_subsystem_add_listener", 00:06:00.053 "nvmf_delete_subsystem", 00:06:00.053 "nvmf_create_subsystem", 00:06:00.053 "nvmf_get_subsystems", 00:06:00.053 "env_dpdk_get_mem_stats", 00:06:00.053 "nbd_get_disks", 00:06:00.053 "nbd_stop_disk", 00:06:00.053 "nbd_start_disk", 00:06:00.053 "ublk_recover_disk", 00:06:00.053 "ublk_get_disks", 00:06:00.053 "ublk_stop_disk", 00:06:00.053 "ublk_start_disk", 00:06:00.053 "ublk_destroy_target", 00:06:00.053 "ublk_create_target", 00:06:00.053 "virtio_blk_create_transport", 00:06:00.053 "virtio_blk_get_transports", 00:06:00.053 "vhost_controller_set_coalescing", 00:06:00.053 "vhost_get_controllers", 00:06:00.053 "vhost_delete_controller", 00:06:00.053 "vhost_create_blk_controller", 00:06:00.053 "vhost_scsi_controller_remove_target", 00:06:00.053 "vhost_scsi_controller_add_target", 00:06:00.053 "vhost_start_scsi_controller", 00:06:00.053 "vhost_create_scsi_controller", 00:06:00.053 "thread_set_cpumask", 00:06:00.053 "scheduler_set_options", 00:06:00.053 "framework_get_governor", 00:06:00.053 "framework_get_scheduler", 00:06:00.053 "framework_set_scheduler", 00:06:00.053 "framework_get_reactors", 00:06:00.053 "thread_get_io_channels", 00:06:00.053 "thread_get_pollers", 00:06:00.053 "thread_get_stats", 00:06:00.053 "framework_monitor_context_switch", 00:06:00.053 "spdk_kill_instance", 00:06:00.053 "log_enable_timestamps", 00:06:00.053 "log_get_flags", 00:06:00.053 "log_clear_flag", 00:06:00.053 "log_set_flag", 00:06:00.053 "log_get_level", 00:06:00.053 "log_set_level", 00:06:00.053 "log_get_print_level", 00:06:00.053 "log_set_print_level", 00:06:00.053 "framework_enable_cpumask_locks", 00:06:00.053 "framework_disable_cpumask_locks", 00:06:00.053 "framework_wait_init", 00:06:00.053 "framework_start_init", 00:06:00.053 "scsi_get_devices", 00:06:00.053 "bdev_get_histogram", 00:06:00.053 "bdev_enable_histogram", 00:06:00.054 "bdev_set_qos_limit", 00:06:00.054 "bdev_set_qd_sampling_period", 00:06:00.054 "bdev_get_bdevs", 00:06:00.054 "bdev_reset_iostat", 00:06:00.054 "bdev_get_iostat", 00:06:00.054 "bdev_examine", 00:06:00.054 "bdev_wait_for_examine", 00:06:00.054 "bdev_set_options", 00:06:00.054 "accel_get_stats", 00:06:00.054 "accel_set_options", 00:06:00.054 "accel_set_driver", 00:06:00.054 "accel_crypto_key_destroy", 00:06:00.054 "accel_crypto_keys_get", 00:06:00.054 "accel_crypto_key_create", 00:06:00.054 "accel_assign_opc", 00:06:00.054 "accel_get_module_info", 00:06:00.054 "accel_get_opc_assignments", 00:06:00.054 "vmd_rescan", 00:06:00.054 "vmd_remove_device", 00:06:00.054 "vmd_enable", 00:06:00.054 "sock_get_default_impl", 00:06:00.054 "sock_set_default_impl", 00:06:00.054 "sock_impl_set_options", 00:06:00.054 "sock_impl_get_options", 00:06:00.054 "iobuf_get_stats", 00:06:00.054 "iobuf_set_options", 00:06:00.054 "keyring_get_keys", 00:06:00.054 "framework_get_pci_devices", 00:06:00.054 
"framework_get_config", 00:06:00.054 "framework_get_subsystems", 00:06:00.054 "fsdev_set_opts", 00:06:00.054 "fsdev_get_opts", 00:06:00.054 "trace_get_info", 00:06:00.054 "trace_get_tpoint_group_mask", 00:06:00.054 "trace_disable_tpoint_group", 00:06:00.054 "trace_enable_tpoint_group", 00:06:00.054 "trace_clear_tpoint_mask", 00:06:00.054 "trace_set_tpoint_mask", 00:06:00.054 "notify_get_notifications", 00:06:00.054 "notify_get_types", 00:06:00.054 "spdk_get_version", 00:06:00.054 "rpc_get_methods" 00:06:00.054 ] 00:06:00.054 23:54:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.054 23:54:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:00.054 23:54:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70222 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70222 ']' 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70222 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70222 00:06:00.054 killing process with pid 70222 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70222' 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70222 00:06:00.054 23:54:50 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70222 00:06:00.315 ************************************ 00:06:00.315 END TEST spdkcli_tcp 00:06:00.315 ************************************ 00:06:00.315 00:06:00.315 real 0m1.550s 00:06:00.315 user 0m2.767s 00:06:00.315 sys 0m0.383s 00:06:00.315 23:54:50 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.315 23:54:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.315 23:54:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.315 23:54:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.315 23:54:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.315 23:54:50 -- common/autotest_common.sh@10 -- # set +x 00:06:00.315 ************************************ 00:06:00.315 START TEST dpdk_mem_utility 00:06:00.315 ************************************ 00:06:00.315 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.315 * Looking for test storage... 
00:06:00.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:00.315 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.315 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.315 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.575 23:54:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.575 --rc genhtml_branch_coverage=1 00:06:00.575 --rc genhtml_function_coverage=1 00:06:00.575 --rc genhtml_legend=1 00:06:00.575 --rc geninfo_all_blocks=1 00:06:00.575 --rc geninfo_unexecuted_blocks=1 00:06:00.575 00:06:00.575 ' 00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.575 --rc genhtml_branch_coverage=1 00:06:00.575 --rc genhtml_function_coverage=1 00:06:00.575 --rc genhtml_legend=1 00:06:00.575 --rc geninfo_all_blocks=1 00:06:00.575 --rc geninfo_unexecuted_blocks=1 00:06:00.575 00:06:00.575 ' 00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.575 --rc genhtml_branch_coverage=1 00:06:00.575 --rc genhtml_function_coverage=1 00:06:00.575 --rc genhtml_legend=1 00:06:00.575 --rc geninfo_all_blocks=1 00:06:00.575 --rc geninfo_unexecuted_blocks=1 00:06:00.575 00:06:00.575 ' 00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.575 --rc genhtml_branch_coverage=1 00:06:00.575 --rc genhtml_function_coverage=1 00:06:00.575 --rc genhtml_legend=1 00:06:00.575 --rc geninfo_all_blocks=1 00:06:00.575 --rc geninfo_unexecuted_blocks=1 00:06:00.575 00:06:00.575 ' 00:06:00.575 23:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:00.575 23:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70316 00:06:00.575 23:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70316 00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70316 ']' 00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.575 23:54:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.575 23:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.575 [2024-11-20 23:54:50.856859] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:00.575 [2024-11-20 23:54:50.856982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70316 ] 00:06:00.575 [2024-11-20 23:54:50.991450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.834 [2024-11-20 23:54:51.022989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.398 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.398 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:01.398 23:54:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:01.398 23:54:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:01.398 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.398 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.398 { 00:06:01.398 "filename": "/tmp/spdk_mem_dump.txt" 00:06:01.398 } 00:06:01.398 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.398 23:54:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:01.398 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:01.398 1 heaps totaling size 860.000000 MiB 00:06:01.398 size: 860.000000 MiB heap id: 0 00:06:01.398 end heaps---------- 00:06:01.398 9 mempools totaling size 642.649841 MiB 00:06:01.398 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:01.398 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:01.398 size: 92.545471 MiB name: bdev_io_70316 00:06:01.398 size: 51.011292 MiB name: evtpool_70316 00:06:01.398 size: 50.003479 MiB name: msgpool_70316 00:06:01.398 size: 36.509338 MiB name: fsdev_io_70316 00:06:01.398 size: 21.763794 MiB name: PDU_Pool 00:06:01.398 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:01.398 size: 0.026123 MiB name: Session_Pool 00:06:01.398 end mempools------- 00:06:01.398 6 memzones totaling size 4.142822 MiB 00:06:01.398 size: 1.000366 MiB name: RG_ring_0_70316 00:06:01.398 size: 1.000366 MiB name: RG_ring_1_70316 00:06:01.398 size: 1.000366 MiB name: RG_ring_4_70316 00:06:01.398 size: 1.000366 MiB name: RG_ring_5_70316 00:06:01.398 size: 0.125366 MiB name: RG_ring_2_70316 00:06:01.398 size: 0.015991 MiB name: RG_ring_3_70316 00:06:01.398 end memzones------- 00:06:01.398 23:54:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:01.398 heap id: 0 total size: 860.000000 MiB number of busy elements: 316 number of free elements: 16 00:06:01.398 list of free elements. 
size: 13.934875 MiB 00:06:01.398 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:01.398 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:01.398 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:01.398 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:01.398 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:01.398 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:01.398 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:01.398 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:01.398 element at address: 0x200000200000 with size: 0.835022 MiB 00:06:01.398 element at address: 0x20001d800000 with size: 0.566956 MiB 00:06:01.398 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:01.398 element at address: 0x200003e00000 with size: 0.487732 MiB 00:06:01.398 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:01.398 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:01.398 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:01.398 element at address: 0x200003a00000 with size: 0.352844 MiB 00:06:01.399 list of standard malloc elements. size: 199.268433 MiB 00:06:01.399 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:01.399 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:01.399 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:01.399 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:01.399 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:01.399 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:01.399 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:01.399 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:01.399 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:01.399 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:06:01.399 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a5a540 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a5ea00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:01.399 element at 
address: 0x200003e7d540 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:01.399 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000d87d4c0 
with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891240 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891300 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892c80 with size: 0.000183 MiB 
00:06:01.399 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:01.399 element at 
address: 0x20001d895200 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e340 
with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:01.399 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:01.400 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:01.400 list of memzone associated elements. 
size: 646.796692 MiB 00:06:01.400 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:01.400 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:01.400 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:01.400 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:01.400 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:01.400 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70316_0 00:06:01.400 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:01.400 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70316_0 00:06:01.400 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:01.400 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70316_0 00:06:01.400 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:01.400 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70316_0 00:06:01.400 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:01.400 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:01.400 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:01.400 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:01.400 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:01.400 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70316 00:06:01.400 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:01.400 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70316 00:06:01.400 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:01.400 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70316 00:06:01.400 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:01.400 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:01.400 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:01.400 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:01.400 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:01.400 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:01.400 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:01.400 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:01.400 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:01.400 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70316 00:06:01.400 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:01.400 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70316 00:06:01.400 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:01.400 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70316 00:06:01.400 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:01.400 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70316 00:06:01.400 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:01.400 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70316 00:06:01.400 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:01.400 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70316 00:06:01.400 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:01.400 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:01.400 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:01.400 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:01.400 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:01.400 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:01.400 element at address: 0x200003a5eac0 with size: 0.125488 MiB 00:06:01.400 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70316 00:06:01.400 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:01.400 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:01.400 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:01.400 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:01.400 element at address: 0x200003a5a800 with size: 0.016113 MiB 00:06:01.400 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70316 00:06:01.400 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:01.400 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:01.400 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:01.400 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70316 00:06:01.400 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:01.400 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70316 00:06:01.400 element at address: 0x200003a5a600 with size: 0.000305 MiB 00:06:01.400 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70316 00:06:01.400 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:01.400 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:01.400 23:54:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:01.400 23:54:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70316 00:06:01.400 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70316 ']' 00:06:01.400 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70316 00:06:01.400 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:01.400 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.400 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70316 00:06:01.656 killing process with pid 70316 00:06:01.656 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.656 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.656 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70316' 00:06:01.656 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70316 00:06:01.656 23:54:51 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70316 00:06:01.914 00:06:01.914 real 0m1.441s 00:06:01.914 user 0m1.507s 00:06:01.914 sys 0m0.337s 00:06:01.914 23:54:52 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.914 ************************************ 00:06:01.914 END TEST dpdk_mem_utility 00:06:01.914 ************************************ 00:06:01.914 23:54:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.914 23:54:52 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:01.914 23:54:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.914 23:54:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.914 23:54:52 -- common/autotest_common.sh@10 -- # set +x 
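The dpdk_mem_utility test above exercises a two-step flow; a minimal sketch of running it by hand against a live target, using the same scripts and paths as this run:

# Step 1: ask the target to dump its DPDK memory stats
# (the RPC response above shows the dump lands in /tmp/spdk_mem_dump.txt)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# Step 2: summarize heaps, mempools and memzones from the dump file
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
# Optional: per-element detail for heap 0, as in the long element lists above
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0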
00:06:01.914 ************************************ 00:06:01.914 START TEST event 00:06:01.914 ************************************ 00:06:01.914 23:54:52 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:01.914 * Looking for test storage... 00:06:01.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:01.914 23:54:52 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:01.914 23:54:52 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:01.914 23:54:52 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:01.914 23:54:52 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:01.914 23:54:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.914 23:54:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.914 23:54:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.914 23:54:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.915 23:54:52 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.915 23:54:52 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.915 23:54:52 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.915 23:54:52 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.915 23:54:52 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.915 23:54:52 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.915 23:54:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.915 23:54:52 event -- scripts/common.sh@344 -- # case "$op" in 00:06:01.915 23:54:52 event -- scripts/common.sh@345 -- # : 1 00:06:01.915 23:54:52 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.915 23:54:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.915 23:54:52 event -- scripts/common.sh@365 -- # decimal 1 00:06:01.915 23:54:52 event -- scripts/common.sh@353 -- # local d=1 00:06:01.915 23:54:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.915 23:54:52 event -- scripts/common.sh@355 -- # echo 1 00:06:01.915 23:54:52 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.915 23:54:52 event -- scripts/common.sh@366 -- # decimal 2 00:06:01.915 23:54:52 event -- scripts/common.sh@353 -- # local d=2 00:06:01.915 23:54:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.915 23:54:52 event -- scripts/common.sh@355 -- # echo 2 00:06:01.915 23:54:52 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.915 23:54:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.915 23:54:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.915 23:54:52 event -- scripts/common.sh@368 -- # return 0 00:06:01.915 23:54:52 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.915 23:54:52 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:01.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.915 --rc genhtml_branch_coverage=1 00:06:01.915 --rc genhtml_function_coverage=1 00:06:01.915 --rc genhtml_legend=1 00:06:01.915 --rc geninfo_all_blocks=1 00:06:01.915 --rc geninfo_unexecuted_blocks=1 00:06:01.915 00:06:01.915 ' 00:06:01.915 23:54:52 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:01.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.915 --rc genhtml_branch_coverage=1 00:06:01.915 --rc genhtml_function_coverage=1 00:06:01.915 --rc genhtml_legend=1 00:06:01.915 --rc 
geninfo_all_blocks=1 00:06:01.915 --rc geninfo_unexecuted_blocks=1 00:06:01.915 00:06:01.915 ' 00:06:01.915 23:54:52 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:01.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.915 --rc genhtml_branch_coverage=1 00:06:01.915 --rc genhtml_function_coverage=1 00:06:01.915 --rc genhtml_legend=1 00:06:01.915 --rc geninfo_all_blocks=1 00:06:01.915 --rc geninfo_unexecuted_blocks=1 00:06:01.915 00:06:01.915 ' 00:06:01.915 23:54:52 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:01.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.915 --rc genhtml_branch_coverage=1 00:06:01.915 --rc genhtml_function_coverage=1 00:06:01.915 --rc genhtml_legend=1 00:06:01.915 --rc geninfo_all_blocks=1 00:06:01.915 --rc geninfo_unexecuted_blocks=1 00:06:01.915 00:06:01.915 ' 00:06:01.915 23:54:52 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:01.915 23:54:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:01.915 23:54:52 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:01.915 23:54:52 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:01.915 23:54:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.915 23:54:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.915 ************************************ 00:06:01.915 START TEST event_perf 00:06:01.915 ************************************ 00:06:01.915 23:54:52 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:01.915 Running I/O for 1 seconds...[2024-11-20 23:54:52.294071] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:01.915 [2024-11-20 23:54:52.294274] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70397 ] 00:06:02.172 [2024-11-20 23:54:52.430022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.172 [2024-11-20 23:54:52.464176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.172 [2024-11-20 23:54:52.464490] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.172 [2024-11-20 23:54:52.464553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.172 Running I/O for 1 seconds...[2024-11-20 23:54:52.464501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.104 00:06:03.104 lcore 0: 191496 00:06:03.104 lcore 1: 191492 00:06:03.104 lcore 2: 191491 00:06:03.104 lcore 3: 191494 00:06:03.104 done. 
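The four lcore counters above come from a one-second run with reactors on cores 0-3; the binary can be rerun standalone with the exact arguments this test used:

# -m 0xF -> reactor core mask for cores 0-3 (one 'lcore N:' counter per core above)
# -t 1   -> run the event-loop measurement for one second
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1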
00:06:03.104 00:06:03.104 real 0m1.251s 00:06:03.104 user 0m4.068s 00:06:03.104 sys 0m0.067s 00:06:03.104 23:54:53 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.104 ************************************ 00:06:03.104 END TEST event_perf 00:06:03.104 ************************************ 00:06:03.104 23:54:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.361 23:54:53 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:03.361 23:54:53 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:03.361 23:54:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.361 23:54:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.361 ************************************ 00:06:03.361 START TEST event_reactor 00:06:03.361 ************************************ 00:06:03.361 23:54:53 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:03.361 [2024-11-20 23:54:53.586863] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:03.361 [2024-11-20 23:54:53.586979] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70431 ] 00:06:03.361 [2024-11-20 23:54:53.716944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.361 [2024-11-20 23:54:53.748257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.733 test_start 00:06:04.733 oneshot 00:06:04.733 tick 100 00:06:04.733 tick 100 00:06:04.733 tick 250 00:06:04.733 tick 100 00:06:04.733 tick 100 00:06:04.733 tick 100 00:06:04.733 tick 250 00:06:04.733 tick 500 00:06:04.733 tick 100 00:06:04.733 tick 100 00:06:04.733 tick 250 00:06:04.733 tick 100 00:06:04.733 tick 100 00:06:04.733 test_end 00:06:04.733 00:06:04.733 real 0m1.239s 00:06:04.733 user 0m1.082s 00:06:04.733 sys 0m0.049s 00:06:04.733 23:54:54 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.733 ************************************ 00:06:04.733 END TEST event_reactor 00:06:04.733 ************************************ 00:06:04.733 23:54:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:04.733 23:54:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.733 23:54:54 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:04.733 23:54:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.733 23:54:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.733 ************************************ 00:06:04.733 START TEST event_reactor_perf 00:06:04.733 ************************************ 00:06:04.733 23:54:54 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.733 [2024-11-20 23:54:54.866676] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:04.733 [2024-11-20 23:54:54.866787] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70467 ] 00:06:04.733 [2024-11-20 23:54:55.001584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.733 [2024-11-20 23:54:55.031459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.667 test_start 00:06:05.667 test_end 00:06:05.667 Performance: 314287 events per second 00:06:05.926 ************************************ 00:06:05.926 END TEST event_reactor_perf 00:06:05.926 ************************************ 00:06:05.926 00:06:05.926 real 0m1.243s 00:06:05.926 user 0m1.079s 00:06:05.926 sys 0m0.058s 00:06:05.926 23:54:56 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.926 23:54:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.926 23:54:56 event -- event/event.sh@49 -- # uname -s 00:06:05.926 23:54:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:05.926 23:54:56 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:05.926 23:54:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.926 23:54:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.926 23:54:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.926 ************************************ 00:06:05.926 START TEST event_scheduler 00:06:05.926 ************************************ 00:06:05.926 23:54:56 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:05.926 * Looking for test storage... 
00:06:05.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:05.926 23:54:56 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.926 23:54:56 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.926 23:54:56 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.926 23:54:56 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.926 23:54:56 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:05.927 23:54:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:05.927 23:54:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.927 23:54:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:05.927 23:54:56 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.927 23:54:56 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.927 23:54:56 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.927 23:54:56 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:05.927 23:54:56 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.927 23:54:56 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.927 --rc genhtml_branch_coverage=1 00:06:05.927 --rc genhtml_function_coverage=1 00:06:05.927 --rc genhtml_legend=1 00:06:05.927 --rc geninfo_all_blocks=1 00:06:05.927 --rc geninfo_unexecuted_blocks=1 00:06:05.927 00:06:05.927 ' 00:06:05.927 23:54:56 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.927 --rc genhtml_branch_coverage=1 00:06:05.927 --rc genhtml_function_coverage=1 00:06:05.927 --rc genhtml_legend=1 00:06:05.927 --rc geninfo_all_blocks=1 00:06:05.927 --rc geninfo_unexecuted_blocks=1 00:06:05.927 00:06:05.927 ' 00:06:05.927 23:54:56 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.927 --rc genhtml_branch_coverage=1 00:06:05.927 --rc genhtml_function_coverage=1 00:06:05.927 --rc genhtml_legend=1 00:06:05.927 --rc geninfo_all_blocks=1 00:06:05.927 --rc geninfo_unexecuted_blocks=1 00:06:05.927 00:06:05.927 ' 00:06:05.927 23:54:56 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.927 --rc genhtml_branch_coverage=1 00:06:05.927 --rc genhtml_function_coverage=1 00:06:05.927 --rc genhtml_legend=1 00:06:05.927 --rc geninfo_all_blocks=1 00:06:05.927 --rc geninfo_unexecuted_blocks=1 00:06:05.927 00:06:05.927 ' 00:06:05.927 23:54:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:05.927 23:54:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70538 00:06:05.927 23:54:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.927 23:54:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:05.927 23:54:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70538 00:06:05.927 23:54:56 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70538 ']' 00:06:05.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.927 23:54:56 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.927 23:54:56 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.927 23:54:56 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.927 23:54:56 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.927 23:54:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.927 [2024-11-20 23:54:56.321968] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:05.927 [2024-11-20 23:54:56.322198] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70538 ] 00:06:06.185 [2024-11-20 23:54:56.458127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.185 [2024-11-20 23:54:56.491941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.185 [2024-11-20 23:54:56.492396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.185 [2024-11-20 23:54:56.492475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.185 [2024-11-20 23:54:56.492546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.752 23:54:57 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.752 23:54:57 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:06.752 23:54:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:06.752 23:54:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.752 23:54:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.752 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:06.752 POWER: Cannot set governor of lcore 0 to userspace 00:06:06.752 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:06.752 POWER: Cannot set governor of lcore 0 to performance 00:06:06.752 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:06.752 POWER: Cannot set governor of lcore 0 to userspace 00:06:06.752 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:06.752 POWER: Unable to set Power Management Environment for lcore 0 00:06:06.752 [2024-11-20 23:54:57.166231] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:06.752 [2024-11-20 23:54:57.166318] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:06.752 [2024-11-20 23:54:57.166386] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:06.752 [2024-11-20 23:54:57.166429] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:06.752 [2024-11-20 23:54:57.166450] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:06.752 [2024-11-20 23:54:57.166523] scheduler_dynamic.c: 
431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:06.752 23:54:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.752 23:54:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:06.752 23:54:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.752 23:54:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.026 [2024-11-20 23:54:57.220187] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:07.026 23:54:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.026 23:54:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:07.026 23:54:57 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.026 23:54:57 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.026 23:54:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.026 ************************************ 00:06:07.026 START TEST scheduler_create_thread 00:06:07.026 ************************************ 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.026 2 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.026 3 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.026 4 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.026 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.026 5 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.027 6 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.027 7 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.027 8 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.027 9 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.027 10 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.027 23:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.439 23:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.439 23:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:08.439 23:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:08.439 23:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.439 23:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.819 ************************************ 00:06:09.819 END TEST scheduler_create_thread 00:06:09.819 ************************************ 00:06:09.819 23:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.819 00:06:09.819 real 0m2.613s 00:06:09.819 user 0m0.015s 00:06:09.819 sys 0m0.004s 00:06:09.819 23:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.819 23:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.819 23:54:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:09.819 23:54:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70538 00:06:09.819 23:54:59 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70538 ']' 00:06:09.819 23:54:59 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70538 00:06:09.819 23:54:59 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:09.819 23:54:59 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.819 23:54:59 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70538 00:06:09.819 killing process with pid 70538 00:06:09.819 23:54:59 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:09.819 23:54:59 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:09.819 23:54:59 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70538' 00:06:09.819 23:54:59 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70538 00:06:09.819 23:54:59 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70538 00:06:10.077 [2024-11-20 23:55:00.323464] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
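The scheduler_create_thread run above is driven entirely over SPDK's JSON-RPC socket: the bare numbers 2 through 10 interleaved in the trace are the ids of synthetic threads being registered for the dynamic scheduler to balance. As a minimal standalone sketch of that same sequence — assuming, as in the log, a scheduler test app already running with --wait-for-rpc, and that scheduler_plugin.py is importable by rpc.py (e.g. via PYTHONPATH; the rpc_cmd helper arranges this in the test) — it would look like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_set_scheduler dynamic     # select the dynamic scheduler before init
    $rpc framework_start_init                # finish the startup deferred by --wait-for-rpc
    # pinned threads: -n name, -m cpumask, -a percent of time spent busy
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # unpinned threads omit the cpumask; create calls return the thread id
    $rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
    $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50   # retune thread 11 to 50% busy
    $rpc --plugin scheduler_plugin scheduler_thread_delete 12          # remove thread 12

The earlier POWER/GUEST_CHANNEL errors appear to be the normal fallback on a VM with no cpufreq or virtio power channel access: the dpdk governor fails to initialize on core 0, and the NOTICEs show the dynamic scheduler proceeding anyway with its defaults (load limit 20, core limit 80, core busy 95).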
00:06:10.077 00:06:10.077 real 0m4.347s 00:06:10.077 user 0m8.047s 00:06:10.077 sys 0m0.319s 00:06:10.077 ************************************ 00:06:10.077 END TEST event_scheduler 00:06:10.077 ************************************ 00:06:10.077 23:55:00 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.077 23:55:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.336 23:55:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:10.336 23:55:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:10.336 23:55:00 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.336 23:55:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.336 23:55:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.336 ************************************ 00:06:10.336 START TEST app_repeat 00:06:10.336 ************************************ 00:06:10.336 23:55:00 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:10.336 Process app_repeat pid: 70633 00:06:10.336 spdk_app_start Round 0 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70633 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70633' 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70633 /var/tmp/spdk-nbd.sock 00:06:10.336 23:55:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:10.336 23:55:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70633 ']' 00:06:10.336 23:55:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.336 23:55:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.336 23:55:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.336 23:55:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.336 23:55:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.336 [2024-11-20 23:55:00.552894] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:10.336 [2024-11-20 23:55:00.553018] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70633 ] 00:06:10.336 [2024-11-20 23:55:00.689029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.336 [2024-11-20 23:55:00.720927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.336 [2024-11-20 23:55:00.721004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.269 23:55:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.269 23:55:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:11.269 23:55:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.269 Malloc0 00:06:11.269 23:55:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.527 Malloc1 00:06:11.527 23:55:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.527 23:55:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.786 /dev/nbd0 00:06:11.786 23:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.786 23:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:11.786 23:55:02 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.786 1+0 records in 00:06:11.786 1+0 records out 00:06:11.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320923 s, 12.8 MB/s 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:11.786 23:55:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:11.786 23:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.786 23:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.786 23:55:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.045 /dev/nbd1 00:06:12.045 23:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.045 23:55:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.045 1+0 records in 00:06:12.045 1+0 records out 00:06:12.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181578 s, 22.6 MB/s 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.045 23:55:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:12.045 23:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.045 23:55:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.045 23:55:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.045 23:55:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:06:12.045 23:55:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.304 { 00:06:12.304 "nbd_device": "/dev/nbd0", 00:06:12.304 "bdev_name": "Malloc0" 00:06:12.304 }, 00:06:12.304 { 00:06:12.304 "nbd_device": "/dev/nbd1", 00:06:12.304 "bdev_name": "Malloc1" 00:06:12.304 } 00:06:12.304 ]' 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.304 { 00:06:12.304 "nbd_device": "/dev/nbd0", 00:06:12.304 "bdev_name": "Malloc0" 00:06:12.304 }, 00:06:12.304 { 00:06:12.304 "nbd_device": "/dev/nbd1", 00:06:12.304 "bdev_name": "Malloc1" 00:06:12.304 } 00:06:12.304 ]' 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.304 /dev/nbd1' 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.304 /dev/nbd1' 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.304 23:55:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.305 256+0 records in 00:06:12.305 256+0 records out 00:06:12.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137209 s, 76.4 MB/s 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.305 256+0 records in 00:06:12.305 256+0 records out 00:06:12.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159056 s, 65.9 MB/s 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.305 256+0 records in 00:06:12.305 256+0 records out 00:06:12.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150655 s, 69.6 MB/s 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.305 23:55:02 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.305 23:55:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.564 23:55:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.564 23:55:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.564 23:55:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.564 23:55:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.564 23:55:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.564 23:55:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.564 23:55:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.564 23:55:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.564 23:55:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.564 23:55:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.823 23:55:03 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.823 23:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.082 23:55:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.082 23:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.082 23:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.082 23:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.082 23:55:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.082 23:55:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.082 23:55:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.082 23:55:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.082 23:55:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.082 23:55:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.082 23:55:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.341 [2024-11-20 23:55:03.565351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.341 [2024-11-20 23:55:03.593713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.341 [2024-11-20 23:55:03.593820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.341 [2024-11-20 23:55:03.623529] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.341 [2024-11-20 23:55:03.623584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.625 spdk_app_start Round 1 00:06:16.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.625 23:55:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:16.625 23:55:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:16.625 23:55:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70633 /var/tmp/spdk-nbd.sock 00:06:16.625 23:55:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70633 ']' 00:06:16.625 23:55:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.625 23:55:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.625 23:55:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
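Round 0 above already contains the full app_repeat cycle that Rounds 1 and 2 below repeat verbatim: create two malloc bdevs, expose them as kernel NBD devices, push random data through the block layer, read it back for comparison, then detach everything and kill the target. Condensed to one device — a sketch assuming the app_repeat binary is listening on /var/tmp/spdk-nbd.sock as in the log and the nbd kernel module is loaded — one round is:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096        # 64 MiB bdev, 4 KiB blocks -> named "Malloc0"
    $rpc nbd_start_disk Malloc0 /dev/nbd0  # attach the bdev to a kernel NBD node
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256            # 1 MiB of random test data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through NBD, bypassing the page cache
    cmp -b -n 1M nbdrandtest /dev/nbd0     # byte-compare the round trip
    $rpc nbd_stop_disk /dev/nbd0
    $rpc spdk_kill_instance SIGTERM        # stop the target between rounds

The dd/cmp pair is the actual data-integrity check; the JSON printed by nbd_get_disks before and after is only consumed by grep -c /dev/nbd to confirm that two devices are attached during the run and zero remain once they are stopped.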
00:06:16.625 23:55:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.625 23:55:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.625 23:55:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.625 23:55:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:16.625 23:55:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.625 Malloc0 00:06:16.625 23:55:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.921 Malloc1 00:06:16.921 23:55:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.921 /dev/nbd0 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.921 1+0 records in 00:06:16.921 1+0 records out 
00:06:16.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407056 s, 10.1 MB/s 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.921 23:55:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.921 23:55:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.206 /dev/nbd1 00:06:17.206 23:55:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.206 23:55:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.206 1+0 records in 00:06:17.206 1+0 records out 00:06:17.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179676 s, 22.8 MB/s 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:17.206 23:55:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:17.206 23:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.206 23:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.206 23:55:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.206 23:55:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.206 23:55:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.465 { 00:06:17.465 "nbd_device": "/dev/nbd0", 00:06:17.465 "bdev_name": "Malloc0" 00:06:17.465 }, 00:06:17.465 { 00:06:17.465 "nbd_device": "/dev/nbd1", 00:06:17.465 "bdev_name": "Malloc1" 00:06:17.465 } 
00:06:17.465 ]' 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.465 { 00:06:17.465 "nbd_device": "/dev/nbd0", 00:06:17.465 "bdev_name": "Malloc0" 00:06:17.465 }, 00:06:17.465 { 00:06:17.465 "nbd_device": "/dev/nbd1", 00:06:17.465 "bdev_name": "Malloc1" 00:06:17.465 } 00:06:17.465 ]' 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.465 /dev/nbd1' 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.465 /dev/nbd1' 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.465 256+0 records in 00:06:17.465 256+0 records out 00:06:17.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0098807 s, 106 MB/s 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.465 256+0 records in 00:06:17.465 256+0 records out 00:06:17.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146074 s, 71.8 MB/s 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.465 256+0 records in 00:06:17.465 256+0 records out 00:06:17.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166728 s, 62.9 MB/s 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.465 23:55:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.724 23:55:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.724 23:55:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.724 23:55:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.724 23:55:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.724 23:55:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.724 23:55:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.724 23:55:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.724 23:55:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.724 23:55:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.724 23:55:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.981 23:55:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.981 23:55:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.981 23:55:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.981 23:55:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.981 23:55:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.981 23:55:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.981 23:55:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.981 23:55:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.981 23:55:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.981 23:55:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.981 23:55:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.239 23:55:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.239 23:55:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.498 23:55:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.498 [2024-11-20 23:55:08.789532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.498 [2024-11-20 23:55:08.818519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.498 [2024-11-20 23:55:08.818615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.498 [2024-11-20 23:55:08.848042] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.498 [2024-11-20 23:55:08.848087] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.780 23:55:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.781 spdk_app_start Round 2 00:06:21.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.781 23:55:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:21.781 23:55:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70633 /var/tmp/spdk-nbd.sock 00:06:21.781 23:55:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70633 ']' 00:06:21.781 23:55:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.781 23:55:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.781 23:55:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:21.781 23:55:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.781 23:55:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.781 23:55:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.781 23:55:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:21.781 23:55:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.781 Malloc0 00:06:21.781 23:55:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.039 Malloc1 00:06:22.039 23:55:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.039 23:55:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.298 /dev/nbd0 00:06:22.298 23:55:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.298 23:55:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.298 1+0 records in 00:06:22.298 1+0 records out 
00:06:22.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454738 s, 9.0 MB/s 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:22.298 23:55:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:22.298 23:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.298 23:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.298 23:55:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.557 /dev/nbd1 00:06:22.557 23:55:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.557 23:55:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.557 1+0 records in 00:06:22.557 1+0 records out 00:06:22.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154988 s, 26.4 MB/s 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:22.557 23:55:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:22.557 23:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.557 23:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.558 23:55:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.558 23:55:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.558 23:55:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.817 { 00:06:22.817 "nbd_device": "/dev/nbd0", 00:06:22.817 "bdev_name": "Malloc0" 00:06:22.817 }, 00:06:22.817 { 00:06:22.817 "nbd_device": "/dev/nbd1", 00:06:22.817 "bdev_name": "Malloc1" 00:06:22.817 } 
00:06:22.817 ]' 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.817 { 00:06:22.817 "nbd_device": "/dev/nbd0", 00:06:22.817 "bdev_name": "Malloc0" 00:06:22.817 }, 00:06:22.817 { 00:06:22.817 "nbd_device": "/dev/nbd1", 00:06:22.817 "bdev_name": "Malloc1" 00:06:22.817 } 00:06:22.817 ]' 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.817 /dev/nbd1' 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.817 /dev/nbd1' 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.817 256+0 records in 00:06:22.817 256+0 records out 00:06:22.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116483 s, 90.0 MB/s 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.817 256+0 records in 00:06:22.817 256+0 records out 00:06:22.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152181 s, 68.9 MB/s 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.817 256+0 records in 00:06:22.817 256+0 records out 00:06:22.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174051 s, 60.2 MB/s 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.817 23:55:13 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.817 23:55:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.075 23:55:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.075 23:55:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.075 23:55:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.075 23:55:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.075 23:55:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.075 23:55:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.075 23:55:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.075 23:55:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.075 23:55:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.075 23:55:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.333 23:55:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.333 23:55:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.333 23:55:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.333 23:55:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.333 23:55:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.333 23:55:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.333 23:55:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.333 23:55:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.333 23:55:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.333 23:55:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.333 23:55:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.591 23:55:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.591 23:55:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.850 23:55:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.850 [2024-11-20 23:55:14.131017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.850 [2024-11-20 23:55:14.161700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.850 [2024-11-20 23:55:14.161729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.850 [2024-11-20 23:55:14.193748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.850 [2024-11-20 23:55:14.193794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.186 23:55:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70633 /var/tmp/spdk-nbd.sock 00:06:27.186 23:55:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70633 ']' 00:06:27.186 23:55:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.186 23:55:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.186 23:55:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
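The nbd leg of app_repeat traced above reduces to a short RPC-plus-dd round trip. A condensed sketch, using only commands visible in the trace and assuming an SPDK target is already listening on /var/tmp/spdk-nbd.sock (the waitfornbd helper retries the /proc/partitions probe up to 20 times rather than looping forever; temp-file paths are abbreviated):

  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096          # -> Malloc0 (64 MiB, 4 KiB blocks)
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0    # expose the bdev as a kernel block device
  grep -q -w nbd0 /proc/partitions                                     # waitfornbd: device node is ready
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                  # 1 MiB of reference data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct        # write it through the nbd device
  cmp -b -n 1M nbdrandtest /dev/nbd0                                   # verify the readback byte-for-byte
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0             # detach
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true   # expect 0 disks left

Both devices get the same treatment before spdk_kill_instance SIGTERM tears the app down for the next round.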
00:06:27.186 23:55:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.186 23:55:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.186 23:55:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.186 23:55:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:27.186 23:55:17 event.app_repeat -- event/event.sh@39 -- # killprocess 70633 00:06:27.186 23:55:17 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70633 ']' 00:06:27.187 23:55:17 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70633 00:06:27.187 23:55:17 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:27.187 23:55:17 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.187 23:55:17 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70633 00:06:27.187 killing process with pid 70633 00:06:27.187 23:55:17 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.187 23:55:17 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.187 23:55:17 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70633' 00:06:27.187 23:55:17 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70633 00:06:27.187 23:55:17 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70633 00:06:27.187 spdk_app_start is called in Round 0. 00:06:27.187 Shutdown signal received, stop current app iteration 00:06:27.187 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:27.187 spdk_app_start is called in Round 1. 00:06:27.187 Shutdown signal received, stop current app iteration 00:06:27.187 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:27.187 spdk_app_start is called in Round 2. 00:06:27.187 Shutdown signal received, stop current app iteration 00:06:27.187 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:27.187 spdk_app_start is called in Round 3. 00:06:27.187 Shutdown signal received, stop current app iteration 00:06:27.187 ************************************ 00:06:27.187 END TEST app_repeat 00:06:27.187 ************************************ 00:06:27.187 23:55:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:27.187 23:55:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:27.187 00:06:27.187 real 0m16.889s 00:06:27.187 user 0m37.723s 00:06:27.187 sys 0m2.075s 00:06:27.187 23:55:17 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.187 23:55:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.187 23:55:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:27.187 23:55:17 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:27.187 23:55:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.187 23:55:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.188 23:55:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.188 ************************************ 00:06:27.188 START TEST cpu_locks 00:06:27.188 ************************************ 00:06:27.188 23:55:17 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:27.188 * Looking for test storage... 
00:06:27.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:27.188 23:55:17 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:27.188 23:55:17 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:27.188 23:55:17 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:27.188 23:55:17 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:27.188 23:55:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.451 23:55:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:27.451 23:55:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:27.451 23:55:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.451 23:55:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:27.451 23:55:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.451 23:55:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.451 23:55:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.451 23:55:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:27.451 23:55:17 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.451 23:55:17 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:27.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.451 --rc genhtml_branch_coverage=1 00:06:27.451 --rc genhtml_function_coverage=1 00:06:27.451 --rc genhtml_legend=1 00:06:27.451 --rc geninfo_all_blocks=1 00:06:27.451 --rc geninfo_unexecuted_blocks=1 00:06:27.451 00:06:27.451 ' 00:06:27.451 23:55:17 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:27.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.451 --rc genhtml_branch_coverage=1 00:06:27.451 --rc genhtml_function_coverage=1 
00:06:27.451 --rc genhtml_legend=1 00:06:27.451 --rc geninfo_all_blocks=1 00:06:27.451 --rc geninfo_unexecuted_blocks=1 00:06:27.451 00:06:27.451 ' 00:06:27.451 23:55:17 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:27.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.452 --rc genhtml_branch_coverage=1 00:06:27.452 --rc genhtml_function_coverage=1 00:06:27.452 --rc genhtml_legend=1 00:06:27.452 --rc geninfo_all_blocks=1 00:06:27.452 --rc geninfo_unexecuted_blocks=1 00:06:27.452 00:06:27.452 ' 00:06:27.452 23:55:17 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:27.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.452 --rc genhtml_branch_coverage=1 00:06:27.452 --rc genhtml_function_coverage=1 00:06:27.452 --rc genhtml_legend=1 00:06:27.452 --rc geninfo_all_blocks=1 00:06:27.452 --rc geninfo_unexecuted_blocks=1 00:06:27.452 00:06:27.452 ' 00:06:27.452 23:55:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:27.452 23:55:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:27.452 23:55:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:27.452 23:55:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:27.452 23:55:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.452 23:55:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.452 23:55:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.452 ************************************ 00:06:27.452 START TEST default_locks 00:06:27.452 ************************************ 00:06:27.452 23:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:27.452 23:55:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71058 00:06:27.452 23:55:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.452 23:55:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71058 00:06:27.452 23:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71058 ']' 00:06:27.452 23:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.452 23:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.452 23:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.452 23:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.452 23:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.452 [2024-11-20 23:55:17.681572] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:27.452 [2024-11-20 23:55:17.681844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71058 ] 00:06:27.452 [2024-11-20 23:55:17.807154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.452 [2024-11-20 23:55:17.837048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71058 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71058 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71058 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71058 ']' 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71058 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71058 00:06:28.387 killing process with pid 71058 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71058' 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71058 00:06:28.387 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71058 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71058 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71058 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:28.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.646 ERROR: process (pid: 71058) is no longer running 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71058 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71058 ']' 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.646 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71058) - No such process 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.646 00:06:28.646 real 0m1.378s 00:06:28.646 user 0m1.467s 00:06:28.646 sys 0m0.358s 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.646 23:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.646 ************************************ 00:06:28.646 END TEST default_locks 00:06:28.646 ************************************ 00:06:28.646 23:55:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:28.646 23:55:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.646 23:55:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.646 23:55:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.646 ************************************ 00:06:28.646 START TEST default_locks_via_rpc 00:06:28.646 ************************************ 00:06:28.646 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:28.646 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71100 00:06:28.646 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71100 00:06:28.646 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71100 ']' 
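The default_locks run that finishes above is built from a handful of small probes, all visible in the trace: while the target holds core 0, lslocks must report the spdk_cpu_lock entry; after killprocess, re-attaching must fail, which the NOT wrapper converts into es=1. A sketch of those probes, with the pid from this run:

  lslocks -p 71058 | grep -q spdk_cpu_lock   # kernel sees the per-core lock while the target runs
  kill -0 71058                              # killprocess: is the process still alive?
  ps --no-headers -o comm= 71058             # reports reactor_0, i.e. an SPDK reactor thread
  kill 71058 && wait 71058                   # tear down; a later waitforlisten 71058 is expected to fail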
00:06:28.646 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.646 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.646 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.646 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.646 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.646 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.904 [2024-11-20 23:55:19.121819] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:28.904 [2024-11-20 23:55:19.122059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71100 ] 00:06:28.904 [2024-11-20 23:55:19.257516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.904 [2024-11-20 23:55:19.292292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71100 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71100 00:06:29.842 23:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.842 23:55:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71100 00:06:29.842 23:55:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71100 ']' 00:06:29.842 23:55:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71100 00:06:29.842 23:55:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:29.842 23:55:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.842 23:55:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71100 00:06:29.842 killing process with pid 71100 00:06:29.842 23:55:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.842 23:55:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.842 23:55:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71100' 00:06:29.842 23:55:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71100 00:06:29.842 23:55:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71100 00:06:30.100 00:06:30.100 real 0m1.412s 00:06:30.100 user 0m1.467s 00:06:30.100 sys 0m0.404s 00:06:30.100 23:55:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.100 23:55:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.100 ************************************ 00:06:30.100 END TEST default_locks_via_rpc 00:06:30.100 ************************************ 00:06:30.100 23:55:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:30.100 23:55:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.100 23:55:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.100 23:55:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.100 ************************************ 00:06:30.100 START TEST non_locking_app_on_locked_coremask 00:06:30.100 ************************************ 00:06:30.100 23:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:30.100 23:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71146 00:06:30.100 23:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71146 /var/tmp/spdk.sock 00:06:30.100 23:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71146 ']' 00:06:30.100 23:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.100 23:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.101 23:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
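default_locks_via_rpc, which wraps up above, makes the same point at runtime instead of at startup: the core locks can be dropped and re-taken over the RPC socket while the target keeps running. The two calls from the trace, written out in roughly their raw form (rpc_cmd in the script is, in effect, a wrapper around rpc.py against /var/tmp/spdk.sock):

  rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release the core locks; no_locks then finds no lock files
  rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-acquire them; lslocks -p 71100 sees spdk_cpu_lock again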
00:06:30.101 23:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.101 23:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.101 23:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.362 [2024-11-20 23:55:20.566619] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:30.362 [2024-11-20 23:55:20.566881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71146 ] 00:06:30.362 [2024-11-20 23:55:20.694155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.362 [2024-11-20 23:55:20.725314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.306 23:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.306 23:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:31.306 23:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71157 00:06:31.306 23:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71157 /var/tmp/spdk2.sock 00:06:31.306 23:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71157 ']' 00:06:31.306 23:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.306 23:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:31.306 23:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.306 23:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.306 23:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.306 23:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.306 [2024-11-20 23:55:21.432845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:31.307 [2024-11-20 23:55:21.433508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71157 ] 00:06:31.307 [2024-11-20 23:55:21.576028] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.307 [2024-11-20 23:55:21.576112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.307 [2024-11-20 23:55:21.681671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.876 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.876 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:31.876 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71146 00:06:31.876 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71146 00:06:31.876 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.443 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71146 00:06:32.443 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71146 ']' 00:06:32.443 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71146 00:06:32.443 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:32.443 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.443 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71146 00:06:32.443 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.443 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.443 killing process with pid 71146 00:06:32.443 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71146' 00:06:32.443 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71146 00:06:32.443 23:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71146 00:06:32.701 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71157 00:06:32.701 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71157 ']' 00:06:32.701 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71157 00:06:32.701 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:32.701 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.701 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71157 00:06:32.701 killing process with pid 71157 00:06:32.701 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.701 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.701 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71157' 00:06:32.701 23:55:23 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71157 00:06:32.701 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71157 00:06:32.959 00:06:32.959 real 0m2.853s 00:06:32.959 user 0m3.042s 00:06:32.959 sys 0m0.840s 00:06:32.959 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.959 ************************************ 00:06:32.959 23:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.959 END TEST non_locking_app_on_locked_coremask 00:06:32.959 ************************************ 00:06:33.217 23:55:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:33.217 23:55:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.217 23:55:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.217 23:55:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.217 ************************************ 00:06:33.217 START TEST locking_app_on_unlocked_coremask 00:06:33.217 ************************************ 00:06:33.217 23:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:33.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.217 23:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71215 00:06:33.217 23:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71215 /var/tmp/spdk.sock 00:06:33.217 23:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71215 ']' 00:06:33.217 23:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.217 23:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:33.217 23:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.217 23:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.217 23:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.217 23:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.218 [2024-11-20 23:55:23.469771] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:33.218 [2024-11-20 23:55:23.469895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71215 ] 00:06:33.218 [2024-11-20 23:55:23.602465] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
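non_locking_app_on_locked_coremask, which ends above, is the coexistence case: two targets on the same core mask, where the second only starts because it opts out of the locking. Reduced to the two launches from the trace (binary path abbreviated):

  spdk_tgt -m 0x1                                                  # first instance claims core 0 and holds the lock
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock   # second logs "CPU core locks deactivated." and coexists

The locking_app_on_unlocked_coremask test that starts next inverts the roles: the first instance runs with --disable-cpumask-locks, and the second, locking instance on /var/tmp/spdk2.sock must still be able to claim core 0.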
00:06:33.218 [2024-11-20 23:55:23.602615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.218 [2024-11-20 23:55:23.631464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.151 23:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.151 23:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:34.151 23:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.151 23:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71231 00:06:34.152 23:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71231 /var/tmp/spdk2.sock 00:06:34.152 23:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71231 ']' 00:06:34.152 23:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.152 23:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.152 23:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.152 23:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.152 23:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.152 [2024-11-20 23:55:24.266654] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:34.152 [2024-11-20 23:55:24.266898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71231 ] 00:06:34.152 [2024-11-20 23:55:24.396193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.152 [2024-11-20 23:55:24.453343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.720 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.720 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:34.720 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71231 00:06:34.720 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71231 00:06:34.720 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.981 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71215 00:06:34.981 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71215 ']' 00:06:34.981 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71215 00:06:35.239 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:35.239 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.239 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71215 00:06:35.239 killing process with pid 71215 00:06:35.239 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.240 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.240 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71215' 00:06:35.240 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71215 00:06:35.240 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71215 00:06:35.497 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71231 00:06:35.497 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71231 ']' 00:06:35.497 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71231 00:06:35.497 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:35.497 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.497 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71231 00:06:35.829 killing process with pid 71231 00:06:35.829 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.829 23:55:25 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.829 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71231' 00:06:35.829 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71231 00:06:35.829 23:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71231 00:06:35.829 00:06:35.829 real 0m2.769s 00:06:35.829 user 0m2.962s 00:06:35.829 sys 0m0.713s 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.829 ************************************ 00:06:35.829 END TEST locking_app_on_unlocked_coremask 00:06:35.829 ************************************ 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.829 23:55:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.829 23:55:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.829 23:55:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.829 23:55:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.829 ************************************ 00:06:35.829 START TEST locking_app_on_locked_coremask 00:06:35.829 ************************************ 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71289 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71289 /var/tmp/spdk.sock 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71289 ']' 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.829 23:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.106 [2024-11-20 23:55:26.275491] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:36.106 [2024-11-20 23:55:26.275586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71289 ] 00:06:36.106 [2024-11-20 23:55:26.405163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.106 [2024-11-20 23:55:26.435986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71305 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71305 /var/tmp/spdk2.sock 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71305 /var/tmp/spdk2.sock 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:36.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71305 /var/tmp/spdk2.sock 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71305 ']' 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.670 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.928 [2024-11-20 23:55:27.144461] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:36.928 [2024-11-20 23:55:27.144952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71305 ] 00:06:36.928 [2024-11-20 23:55:27.273853] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71289 has claimed it. 00:06:36.928 [2024-11-20 23:55:27.273897] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.495 ERROR: process (pid: 71305) is no longer running 00:06:37.495 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71305) - No such process 00:06:37.495 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.495 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:37.495 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:37.495 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.495 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.495 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.495 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71289 00:06:37.495 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71289 00:06:37.495 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.753 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71289 00:06:37.753 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71289 ']' 00:06:37.753 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71289 00:06:37.753 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:37.753 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.753 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71289 00:06:37.753 killing process with pid 71289 00:06:37.753 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.753 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.753 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71289' 00:06:37.753 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71289 00:06:37.753 23:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71289 00:06:38.012 00:06:38.012 real 0m2.028s 00:06:38.012 user 0m2.230s 00:06:38.012 sys 0m0.489s 00:06:38.012 23:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.012 23:55:28 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:38.012 ************************************ 00:06:38.012 END TEST locking_app_on_locked_coremask 00:06:38.012 ************************************ 00:06:38.012 23:55:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:38.012 23:55:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.012 23:55:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.012 23:55:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.012 ************************************ 00:06:38.012 START TEST locking_overlapped_coremask 00:06:38.012 ************************************ 00:06:38.012 23:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:38.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.012 23:55:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71347 00:06:38.012 23:55:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71347 /var/tmp/spdk.sock 00:06:38.012 23:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71347 ']' 00:06:38.012 23:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.012 23:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.012 23:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.012 23:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.012 23:55:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:38.012 23:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.012 [2024-11-20 23:55:28.342258] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:38.012 [2024-11-20 23:55:28.342367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71347 ] 00:06:38.270 [2024-11-20 23:55:28.475077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.270 [2024-11-20 23:55:28.507641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.270 [2024-11-20 23:55:28.507724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.270 [2024-11-20 23:55:28.507775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71365 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71365 /var/tmp/spdk2.sock 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71365 /var/tmp/spdk2.sock 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:38.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71365 /var/tmp/spdk2.sock 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71365 ']' 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.836 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.836 [2024-11-20 23:55:29.251584] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:38.836 [2024-11-20 23:55:29.252029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71365 ] 00:06:39.094 [2024-11-20 23:55:29.383904] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71347 has claimed it. 00:06:39.094 [2024-11-20 23:55:29.383952] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.660 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71365) - No such process 00:06:39.660 ERROR: process (pid: 71365) is no longer running 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71347 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71347 ']' 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71347 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71347 00:06:39.660 killing process with pid 71347 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71347' 00:06:39.660 23:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71347 00:06:39.660 23:55:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71347 00:06:39.918 00:06:39.918 real 0m1.879s 00:06:39.918 user 0m5.240s 00:06:39.918 sys 0m0.359s 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.918 ************************************ 00:06:39.918 END TEST locking_overlapped_coremask 00:06:39.918 ************************************ 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.918 23:55:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.918 23:55:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.918 23:55:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.918 23:55:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.918 ************************************ 00:06:39.918 START TEST locking_overlapped_coremask_via_rpc 00:06:39.918 ************************************ 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71407 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71407 /var/tmp/spdk.sock 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71407 ']' 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.918 23:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.918 [2024-11-20 23:55:30.291857] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:39.918 [2024-11-20 23:55:30.292118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71407 ] 00:06:40.177 [2024-11-20 23:55:30.427311] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.177 [2024-11-20 23:55:30.427347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.177 [2024-11-20 23:55:30.458071] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.177 [2024-11-20 23:55:30.458365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.177 [2024-11-20 23:55:30.458428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.745 23:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.745 23:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.745 23:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71425 00:06:40.745 23:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71425 /var/tmp/spdk2.sock 00:06:40.745 23:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71425 ']' 00:06:40.745 23:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:40.745 23:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.745 23:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.745 23:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.745 23:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.745 23:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.004 [2024-11-20 23:55:31.190464] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:41.004 [2024-11-20 23:55:31.190740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71425 ] 00:06:41.004 [2024-11-20 23:55:31.331125] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.004 [2024-11-20 23:55:31.331184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.004 [2024-11-20 23:55:31.395465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.004 [2024-11-20 23:55:31.399489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.004 [2024-11-20 23:55:31.399550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.939 [2024-11-20 23:55:32.056443] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71407 has claimed it. 00:06:41.939 request: 00:06:41.939 { 00:06:41.939 "method": "framework_enable_cpumask_locks", 00:06:41.939 "req_id": 1 00:06:41.939 } 00:06:41.939 Got JSON-RPC error response 00:06:41.939 response: 00:06:41.939 { 00:06:41.939 "code": -32603, 00:06:41.939 "message": "Failed to claim CPU core: 2" 00:06:41.939 } 00:06:41.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71407 /var/tmp/spdk.sock 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71407 ']' 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71425 /var/tmp/spdk2.sock 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71425 ']' 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.939 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.197 ************************************ 00:06:42.197 END TEST locking_overlapped_coremask_via_rpc 00:06:42.197 ************************************ 00:06:42.197 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.197 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:42.197 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:42.197 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.197 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.197 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.197 00:06:42.197 real 0m2.250s 00:06:42.197 user 0m1.059s 00:06:42.197 sys 0m0.120s 00:06:42.197 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.197 23:55:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.197 23:55:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:42.197 23:55:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71407 ]] 00:06:42.197 23:55:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71407 00:06:42.197 23:55:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71407 ']' 00:06:42.197 23:55:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71407 00:06:42.197 23:55:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:42.197 23:55:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.197 23:55:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71407 00:06:42.197 killing process with pid 71407 00:06:42.197 23:55:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.197 23:55:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.197 23:55:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71407' 00:06:42.197 23:55:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71407 00:06:42.197 23:55:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71407 00:06:42.455 23:55:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71425 ]] 00:06:42.455 23:55:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71425 00:06:42.455 23:55:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71425 ']' 00:06:42.455 23:55:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71425 00:06:42.455 23:55:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:42.455 23:55:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.455 
23:55:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71425 00:06:42.455 killing process with pid 71425 00:06:42.455 23:55:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:42.455 23:55:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:42.455 23:55:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71425' 00:06:42.455 23:55:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71425 00:06:42.455 23:55:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71425 00:06:42.713 23:55:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.713 Process with pid 71407 is not found 00:06:42.713 23:55:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:42.713 23:55:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71407 ]] 00:06:42.713 23:55:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71407 00:06:42.713 23:55:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71407 ']' 00:06:42.713 23:55:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71407 00:06:42.713 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71407) - No such process 00:06:42.713 23:55:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71407 is not found' 00:06:42.713 23:55:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71425 ]] 00:06:42.713 23:55:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71425 00:06:42.713 23:55:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71425 ']' 00:06:42.713 Process with pid 71425 is not found 00:06:42.713 23:55:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71425 00:06:42.713 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71425) - No such process 00:06:42.713 23:55:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71425 is not found' 00:06:42.713 23:55:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.713 00:06:42.713 real 0m15.581s 00:06:42.713 user 0m27.655s 00:06:42.713 sys 0m3.969s 00:06:42.713 23:55:33 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.713 ************************************ 00:06:42.713 END TEST cpu_locks 00:06:42.713 ************************************ 00:06:42.713 23:55:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.713 ************************************ 00:06:42.713 END TEST event 00:06:42.713 ************************************ 00:06:42.713 00:06:42.713 real 0m40.974s 00:06:42.713 user 1m19.813s 00:06:42.713 sys 0m6.780s 00:06:42.713 23:55:33 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.713 23:55:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.713 23:55:33 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:42.713 23:55:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.713 23:55:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.713 23:55:33 -- common/autotest_common.sh@10 -- # set +x 00:06:42.713 ************************************ 00:06:42.713 START TEST thread 00:06:42.713 ************************************ 00:06:42.713 23:55:33 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:42.971 * Looking for test storage... 
00:06:42.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:42.971 23:55:33 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:42.971 23:55:33 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:42.971 23:55:33 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:42.971 23:55:33 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:42.972 23:55:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.972 23:55:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.972 23:55:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.972 23:55:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.972 23:55:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.972 23:55:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.972 23:55:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.972 23:55:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.972 23:55:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.972 23:55:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.972 23:55:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.972 23:55:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:42.972 23:55:33 thread -- scripts/common.sh@345 -- # : 1 00:06:42.972 23:55:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.972 23:55:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.972 23:55:33 thread -- scripts/common.sh@365 -- # decimal 1 00:06:42.972 23:55:33 thread -- scripts/common.sh@353 -- # local d=1 00:06:42.972 23:55:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.972 23:55:33 thread -- scripts/common.sh@355 -- # echo 1 00:06:42.972 23:55:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.972 23:55:33 thread -- scripts/common.sh@366 -- # decimal 2 00:06:42.972 23:55:33 thread -- scripts/common.sh@353 -- # local d=2 00:06:42.972 23:55:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.972 23:55:33 thread -- scripts/common.sh@355 -- # echo 2 00:06:42.972 23:55:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.972 23:55:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.972 23:55:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.972 23:55:33 thread -- scripts/common.sh@368 -- # return 0 00:06:42.972 23:55:33 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.972 23:55:33 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:42.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.972 --rc genhtml_branch_coverage=1 00:06:42.972 --rc genhtml_function_coverage=1 00:06:42.972 --rc genhtml_legend=1 00:06:42.972 --rc geninfo_all_blocks=1 00:06:42.972 --rc geninfo_unexecuted_blocks=1 00:06:42.972 00:06:42.972 ' 00:06:42.972 23:55:33 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:42.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.972 --rc genhtml_branch_coverage=1 00:06:42.972 --rc genhtml_function_coverage=1 00:06:42.972 --rc genhtml_legend=1 00:06:42.972 --rc geninfo_all_blocks=1 00:06:42.972 --rc geninfo_unexecuted_blocks=1 00:06:42.972 00:06:42.972 ' 00:06:42.972 23:55:33 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:42.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:42.972 --rc genhtml_branch_coverage=1 00:06:42.972 --rc genhtml_function_coverage=1 00:06:42.972 --rc genhtml_legend=1 00:06:42.972 --rc geninfo_all_blocks=1 00:06:42.972 --rc geninfo_unexecuted_blocks=1 00:06:42.972 00:06:42.972 ' 00:06:42.972 23:55:33 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:42.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.972 --rc genhtml_branch_coverage=1 00:06:42.972 --rc genhtml_function_coverage=1 00:06:42.972 --rc genhtml_legend=1 00:06:42.972 --rc geninfo_all_blocks=1 00:06:42.972 --rc geninfo_unexecuted_blocks=1 00:06:42.972 00:06:42.972 ' 00:06:42.972 23:55:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.972 23:55:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:42.972 23:55:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.972 23:55:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.972 ************************************ 00:06:42.972 START TEST thread_poller_perf 00:06:42.972 ************************************ 00:06:42.972 23:55:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.972 [2024-11-20 23:55:33.295524] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:42.972 [2024-11-20 23:55:33.295634] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71552 ] 00:06:43.230 [2024-11-20 23:55:33.431150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.230 [2024-11-20 23:55:33.461761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.230 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:44.164 [2024-11-20T23:55:34.585Z] ====================================== 00:06:44.164 [2024-11-20T23:55:34.585Z] busy:2610823108 (cyc) 00:06:44.164 [2024-11-20T23:55:34.585Z] total_run_count: 408000 00:06:44.164 [2024-11-20T23:55:34.585Z] tsc_hz: 2600000000 (cyc) 00:06:44.164 [2024-11-20T23:55:34.585Z] ====================================== 00:06:44.164 [2024-11-20T23:55:34.585Z] poller_cost: 6399 (cyc), 2461 (nsec) 00:06:44.164 00:06:44.164 real 0m1.253s 00:06:44.164 user 0m1.088s 00:06:44.164 sys 0m0.060s 00:06:44.164 23:55:34 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.164 23:55:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.164 ************************************ 00:06:44.164 END TEST thread_poller_perf 00:06:44.164 ************************************ 00:06:44.164 23:55:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:44.164 23:55:34 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:44.164 23:55:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.164 23:55:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.164 ************************************ 00:06:44.164 START TEST thread_poller_perf 00:06:44.164 ************************************ 00:06:44.164 23:55:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:44.421 [2024-11-20 23:55:34.591476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:44.421 [2024-11-20 23:55:34.591683] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71583 ] 00:06:44.421 [2024-11-20 23:55:34.723730] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.421 [2024-11-20 23:55:34.754183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.421 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:45.795 [2024-11-20T23:55:36.216Z] ====================================== 00:06:45.795 [2024-11-20T23:55:36.216Z] busy:2602892326 (cyc) 00:06:45.795 [2024-11-20T23:55:36.216Z] total_run_count: 5222000 00:06:45.795 [2024-11-20T23:55:36.216Z] tsc_hz: 2600000000 (cyc) 00:06:45.795 [2024-11-20T23:55:36.216Z] ====================================== 00:06:45.795 [2024-11-20T23:55:36.216Z] poller_cost: 498 (cyc), 191 (nsec) 00:06:45.795 00:06:45.795 real 0m1.242s 00:06:45.795 user 0m1.079s 00:06:45.795 sys 0m0.058s 00:06:45.795 23:55:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.795 23:55:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.795 ************************************ 00:06:45.795 END TEST thread_poller_perf 00:06:45.795 ************************************ 00:06:45.795 23:55:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:45.795 00:06:45.795 real 0m2.717s 00:06:45.795 user 0m2.262s 00:06:45.795 sys 0m0.244s 00:06:45.795 ************************************ 00:06:45.795 END TEST thread 00:06:45.795 ************************************ 00:06:45.795 23:55:35 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.795 23:55:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.795 23:55:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:45.795 23:55:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:45.795 23:55:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.795 23:55:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.795 23:55:35 -- common/autotest_common.sh@10 -- # set +x 00:06:45.795 ************************************ 00:06:45.795 START TEST app_cmdline 00:06:45.795 ************************************ 00:06:45.795 23:55:35 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:45.795 * Looking for test storage... 
00:06:45.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:45.795 23:55:35 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:45.795 23:55:35 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:45.795 23:55:35 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:45.795 23:55:35 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:45.795 23:55:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:45.795 23:55:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.795 23:55:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:45.795 23:55:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.795 23:55:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:45.795 23:55:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:45.795 23:55:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.795 23:55:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:45.796 23:55:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.796 23:55:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.796 23:55:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.796 23:55:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:45.796 23:55:36 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.796 23:55:36 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:45.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.796 --rc genhtml_branch_coverage=1 00:06:45.796 --rc genhtml_function_coverage=1 00:06:45.796 --rc genhtml_legend=1 00:06:45.796 --rc geninfo_all_blocks=1 00:06:45.796 --rc geninfo_unexecuted_blocks=1 00:06:45.796 00:06:45.796 ' 00:06:45.796 23:55:36 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:45.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.796 --rc genhtml_branch_coverage=1 00:06:45.796 --rc genhtml_function_coverage=1 00:06:45.796 --rc genhtml_legend=1 00:06:45.796 --rc geninfo_all_blocks=1 00:06:45.796 --rc geninfo_unexecuted_blocks=1 00:06:45.796 
00:06:45.796 ' 00:06:45.796 23:55:36 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:45.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.796 --rc genhtml_branch_coverage=1 00:06:45.796 --rc genhtml_function_coverage=1 00:06:45.796 --rc genhtml_legend=1 00:06:45.796 --rc geninfo_all_blocks=1 00:06:45.796 --rc geninfo_unexecuted_blocks=1 00:06:45.796 00:06:45.796 ' 00:06:45.796 23:55:36 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:45.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.796 --rc genhtml_branch_coverage=1 00:06:45.796 --rc genhtml_function_coverage=1 00:06:45.796 --rc genhtml_legend=1 00:06:45.796 --rc geninfo_all_blocks=1 00:06:45.796 --rc geninfo_unexecuted_blocks=1 00:06:45.796 00:06:45.796 ' 00:06:45.796 23:55:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:45.796 23:55:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71672 00:06:45.796 23:55:36 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:45.796 23:55:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71672 00:06:45.796 23:55:36 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71672 ']' 00:06:45.796 23:55:36 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.796 23:55:36 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.796 23:55:36 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.796 23:55:36 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.796 23:55:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.796 [2024-11-20 23:55:36.081879] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:45.796 [2024-11-20 23:55:36.082135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71672 ] 00:06:46.053 [2024-11-20 23:55:36.217473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.053 [2024-11-20 23:55:36.250127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.648 23:55:36 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.648 23:55:36 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:46.648 23:55:36 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:46.906 { 00:06:46.906 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:46.906 "fields": { 00:06:46.906 "major": 24, 00:06:46.906 "minor": 9, 00:06:46.906 "patch": 1, 00:06:46.906 "suffix": "-pre", 00:06:46.906 "commit": "b18e1bd62" 00:06:46.906 } 00:06:46.906 } 00:06:46.906 23:55:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:46.906 23:55:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:46.906 23:55:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:46.906 23:55:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:46.906 23:55:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:46.906 23:55:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.906 23:55:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.906 23:55:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:46.906 23:55:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:46.906 23:55:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:46.906 23:55:37 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.164 request: 00:06:47.164 { 00:06:47.164 "method": "env_dpdk_get_mem_stats", 00:06:47.164 "req_id": 1 00:06:47.164 } 00:06:47.164 Got JSON-RPC error response 00:06:47.164 response: 00:06:47.164 { 00:06:47.164 "code": -32601, 00:06:47.164 "message": "Method not found" 00:06:47.164 } 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.164 23:55:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71672 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71672 ']' 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71672 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71672 00:06:47.164 killing process with pid 71672 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71672' 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@969 -- # kill 71672 00:06:47.164 23:55:37 app_cmdline -- common/autotest_common.sh@974 -- # wait 71672 00:06:47.422 ************************************ 00:06:47.422 END TEST app_cmdline 00:06:47.422 ************************************ 00:06:47.422 00:06:47.422 real 0m1.795s 00:06:47.422 user 0m2.244s 00:06:47.422 sys 0m0.349s 00:06:47.422 23:55:37 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.422 23:55:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.422 23:55:37 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:47.422 23:55:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.422 23:55:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.422 23:55:37 -- common/autotest_common.sh@10 -- # set +x 00:06:47.422 ************************************ 00:06:47.422 START TEST version 00:06:47.422 ************************************ 00:06:47.422 23:55:37 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:47.422 * Looking for test storage... 
00:06:47.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:47.422 23:55:37 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.422 23:55:37 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.422 23:55:37 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.680 23:55:37 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.680 23:55:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.680 23:55:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.680 23:55:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.680 23:55:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.680 23:55:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.680 23:55:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.680 23:55:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.680 23:55:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.680 23:55:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.680 23:55:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.680 23:55:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.680 23:55:37 version -- scripts/common.sh@344 -- # case "$op" in 00:06:47.680 23:55:37 version -- scripts/common.sh@345 -- # : 1 00:06:47.680 23:55:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.680 23:55:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.680 23:55:37 version -- scripts/common.sh@365 -- # decimal 1 00:06:47.680 23:55:37 version -- scripts/common.sh@353 -- # local d=1 00:06:47.680 23:55:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.680 23:55:37 version -- scripts/common.sh@355 -- # echo 1 00:06:47.680 23:55:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.680 23:55:37 version -- scripts/common.sh@366 -- # decimal 2 00:06:47.680 23:55:37 version -- scripts/common.sh@353 -- # local d=2 00:06:47.680 23:55:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.680 23:55:37 version -- scripts/common.sh@355 -- # echo 2 00:06:47.680 23:55:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.680 23:55:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.680 23:55:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.680 23:55:37 version -- scripts/common.sh@368 -- # return 0 00:06:47.680 23:55:37 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.680 23:55:37 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.680 --rc genhtml_branch_coverage=1 00:06:47.680 --rc genhtml_function_coverage=1 00:06:47.680 --rc genhtml_legend=1 00:06:47.680 --rc geninfo_all_blocks=1 00:06:47.680 --rc geninfo_unexecuted_blocks=1 00:06:47.680 00:06:47.680 ' 00:06:47.680 23:55:37 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.680 --rc genhtml_branch_coverage=1 00:06:47.680 --rc genhtml_function_coverage=1 00:06:47.680 --rc genhtml_legend=1 00:06:47.680 --rc geninfo_all_blocks=1 00:06:47.680 --rc geninfo_unexecuted_blocks=1 00:06:47.680 00:06:47.680 ' 00:06:47.680 23:55:37 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:47.680 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:47.680 --rc genhtml_branch_coverage=1 00:06:47.680 --rc genhtml_function_coverage=1 00:06:47.680 --rc genhtml_legend=1 00:06:47.680 --rc geninfo_all_blocks=1 00:06:47.680 --rc geninfo_unexecuted_blocks=1 00:06:47.680 00:06:47.680 ' 00:06:47.680 23:55:37 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.680 --rc genhtml_branch_coverage=1 00:06:47.680 --rc genhtml_function_coverage=1 00:06:47.680 --rc genhtml_legend=1 00:06:47.680 --rc geninfo_all_blocks=1 00:06:47.680 --rc geninfo_unexecuted_blocks=1 00:06:47.680 00:06:47.680 ' 00:06:47.680 23:55:37 version -- app/version.sh@17 -- # get_header_version major 00:06:47.680 23:55:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.680 23:55:37 version -- app/version.sh@14 -- # cut -f2 00:06:47.680 23:55:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.680 23:55:37 version -- app/version.sh@17 -- # major=24 00:06:47.680 23:55:37 version -- app/version.sh@18 -- # get_header_version minor 00:06:47.680 23:55:37 version -- app/version.sh@14 -- # cut -f2 00:06:47.680 23:55:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.680 23:55:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.680 23:55:37 version -- app/version.sh@18 -- # minor=9 00:06:47.680 23:55:37 version -- app/version.sh@19 -- # get_header_version patch 00:06:47.680 23:55:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.680 23:55:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.680 23:55:37 version -- app/version.sh@14 -- # cut -f2 00:06:47.680 23:55:37 version -- app/version.sh@19 -- # patch=1 00:06:47.680 23:55:37 version -- app/version.sh@20 -- # get_header_version suffix 00:06:47.680 23:55:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.680 23:55:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.680 23:55:37 version -- app/version.sh@14 -- # cut -f2 00:06:47.680 23:55:37 version -- app/version.sh@20 -- # suffix=-pre 00:06:47.680 23:55:37 version -- app/version.sh@22 -- # version=24.9 00:06:47.680 23:55:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:47.680 23:55:37 version -- app/version.sh@25 -- # version=24.9.1 00:06:47.680 23:55:37 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:47.681 23:55:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:47.681 23:55:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:47.681 23:55:37 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:47.681 23:55:37 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:47.681 ************************************ 00:06:47.681 END TEST version 00:06:47.681 ************************************ 00:06:47.681 00:06:47.681 real 0m0.181s 00:06:47.681 user 0m0.108s 00:06:47.681 sys 0m0.095s 00:06:47.681 23:55:37 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.681 23:55:37 
version -- common/autotest_common.sh@10 -- # set +x 00:06:47.681 23:55:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:47.681 23:55:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:47.681 23:55:37 -- spdk/autotest.sh@194 -- # uname -s 00:06:47.681 23:55:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:47.681 23:55:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:47.681 23:55:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:47.681 23:55:37 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:47.681 23:55:37 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:47.681 23:55:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:47.681 23:55:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.681 23:55:37 -- common/autotest_common.sh@10 -- # set +x 00:06:47.681 ************************************ 00:06:47.681 START TEST blockdev_nvme 00:06:47.681 ************************************ 00:06:47.681 23:55:37 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:47.681 * Looking for test storage... 00:06:47.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:47.681 23:55:38 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.681 23:55:38 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.681 23:55:38 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.681 23:55:38 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.681 23:55:38 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:47.681 23:55:38 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.681 23:55:38 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.681 --rc genhtml_branch_coverage=1 00:06:47.681 --rc genhtml_function_coverage=1 00:06:47.681 --rc genhtml_legend=1 00:06:47.681 --rc geninfo_all_blocks=1 00:06:47.681 --rc geninfo_unexecuted_blocks=1 00:06:47.681 00:06:47.681 ' 00:06:47.681 23:55:38 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.681 --rc genhtml_branch_coverage=1 00:06:47.681 --rc genhtml_function_coverage=1 00:06:47.681 --rc genhtml_legend=1 00:06:47.681 --rc geninfo_all_blocks=1 00:06:47.681 --rc geninfo_unexecuted_blocks=1 00:06:47.681 00:06:47.681 ' 00:06:47.681 23:55:38 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:47.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.681 --rc genhtml_branch_coverage=1 00:06:47.681 --rc genhtml_function_coverage=1 00:06:47.681 --rc genhtml_legend=1 00:06:47.681 --rc geninfo_all_blocks=1 00:06:47.681 --rc geninfo_unexecuted_blocks=1 00:06:47.681 00:06:47.681 ' 00:06:47.681 23:55:38 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.681 --rc genhtml_branch_coverage=1 00:06:47.681 --rc genhtml_function_coverage=1 00:06:47.681 --rc genhtml_legend=1 00:06:47.681 --rc geninfo_all_blocks=1 00:06:47.681 --rc geninfo_unexecuted_blocks=1 00:06:47.681 00:06:47.681 ' 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:47.940 23:55:38 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:47.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71833 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71833 00:06:47.940 23:55:38 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 71833 ']' 00:06:47.940 23:55:38 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.940 23:55:38 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.940 23:55:38 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.940 23:55:38 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:47.940 23:55:38 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.940 23:55:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:47.940 [2024-11-20 23:55:38.175741] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
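The waitforlisten step above boils down to polling the target's UNIX-domain RPC socket until it answers. A minimal sketch of that pattern, assuming the stock scripts/rpc.py client and its rpc_get_methods call (the wait_for_rpc_sock helper name is hypothetical, not part of the harness):

  # Poll the SPDK RPC socket until the target responds, or give up.
  wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
      # rpc_get_methods is a cheap RPC that every SPDK target serves.
      if scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
        return 0
      fi
      sleep 0.5
    done
    return 1
  }

The harness's own waitforlisten also checks that the pid is still alive between probes; this sketch covers only the socket side.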
00:06:47.940 [2024-11-20 23:55:38.175863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71833 ] 00:06:47.940 [2024-11-20 23:55:38.310556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.940 [2024-11-20 23:55:38.343617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.884 23:55:39 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.884 23:55:39 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:48.884 23:55:39 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:48.884 23:55:39 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:48.884 23:55:39 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:48.884 23:55:39 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:48.884 23:55:39 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:48.884 23:55:39 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:48.884 23:55:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.884 23:55:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.142 23:55:39 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.142 23:55:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:49.142 23:55:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.142 23:55:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.142 23:55:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.142 23:55:39 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:49.142 23:55:39 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.142 23:55:39 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.142 23:55:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.142 23:55:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:49.142 23:55:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:49.143 23:55:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4a2da371-af41-4092-b34b-3cb0db41ce30"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4a2da371-af41-4092-b34b-3cb0db41ce30",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "72e67e05-c5b1-46a0-903f-08d59aee49ec"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "72e67e05-c5b1-46a0-903f-08d59aee49ec",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "21e8e04f-c5de-4f2e-aa7e-84ff4c6fb6ce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "21e8e04f-c5de-4f2e-aa7e-84ff4c6fb6ce",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "226857bd-7274-4a9e-afd1-233dd342efa2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "226857bd-7274-4a9e-afd1-233dd342efa2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fcc0d839-3f15-473a-9785-8c3afa091aa5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "fcc0d839-3f15-473a-9785-8c3afa091aa5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "73fc2398-8028-4b31-9688-b41e7175ba41"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "73fc2398-8028-4b31-9688-b41e7175ba41",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:49.143 23:55:39 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:49.143 23:55:39 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:49.143 23:55:39 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:49.143 23:55:39 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71833 00:06:49.143 23:55:39 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 71833 ']' 00:06:49.143 23:55:39 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 71833 00:06:49.143 23:55:39 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:49.143 23:55:39 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.143 23:55:39 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71833 00:06:49.143 killing process with pid 71833 00:06:49.143 23:55:39 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.143 23:55:39 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.143 23:55:39 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71833' 00:06:49.143 23:55:39 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 71833 00:06:49.143 23:55:39 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 71833 00:06:49.401 23:55:39 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:49.401 23:55:39 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:49.401 23:55:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:49.401 23:55:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.401 23:55:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.401 ************************************ 00:06:49.401 START TEST bdev_hello_world 00:06:49.401 ************************************ 00:06:49.401 23:55:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:49.659 [2024-11-20 23:55:39.869959] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:49.659 [2024-11-20 23:55:39.870197] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71895 ] 00:06:49.659 [2024-11-20 23:55:40.005793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.659 [2024-11-20 23:55:40.048261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.224 [2024-11-20 23:55:40.416831] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:50.224 [2024-11-20 23:55:40.416895] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:50.224 [2024-11-20 23:55:40.416918] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:50.224 [2024-11-20 23:55:40.418948] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:50.224 [2024-11-20 23:55:40.419808] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:50.224 [2024-11-20 23:55:40.419840] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:50.224 [2024-11-20 23:55:40.420720] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:50.224 00:06:50.224 [2024-11-20 23:55:40.420753] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:50.224 ************************************ 00:06:50.224 END TEST bdev_hello_world 00:06:50.224 ************************************ 00:06:50.224 00:06:50.224 real 0m0.763s 00:06:50.224 user 0m0.518s 00:06:50.224 sys 0m0.142s 00:06:50.224 23:55:40 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.224 23:55:40 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:50.224 23:55:40 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:50.224 23:55:40 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:50.224 23:55:40 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.224 23:55:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:50.224 ************************************ 00:06:50.224 START TEST bdev_bounds 00:06:50.224 ************************************ 00:06:50.224 Process bdevio pid: 71926 00:06:50.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71926 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71926' 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71926 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71926 ']' 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.224 23:55:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:50.482 [2024-11-20 23:55:40.689528] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
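The bdevio binary launched above starts in wait mode and does nothing until the test driver pokes it over RPC, which is what tests.py perform_tests does next. A condensed sketch of that flow, using the same binaries and config file as this run (repo-relative paths; the harness wraps this in its waitforlisten and killprocess helpers, simplified away here):

  # Start bdevio in the background, waiting (-w) for the go signal,
  # then kick off the CUnit suites via the perform_tests RPC helper.
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  bdevio_pid=$!
  test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"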
00:06:50.482 [2024-11-20 23:55:40.690070] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71926 ] 00:06:50.482 [2024-11-20 23:55:40.825965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.483 [2024-11-20 23:55:40.860178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.483 [2024-11-20 23:55:40.860435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.483 [2024-11-20 23:55:40.860495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.419 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.419 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:51.419 23:55:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:51.419 I/O targets: 00:06:51.419 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:51.419 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:51.419 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:51.419 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:51.419 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:51.419 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:51.419 00:06:51.419 00:06:51.420 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.420 http://cunit.sourceforge.net/ 00:06:51.420 00:06:51.420 00:06:51.420 Suite: bdevio tests on: Nvme3n1 00:06:51.420 Test: blockdev write read block ...passed 00:06:51.420 Test: blockdev write zeroes read block ...passed 00:06:51.420 Test: blockdev write zeroes read no split ...passed 00:06:51.420 Test: blockdev write zeroes read split ...passed 00:06:51.420 Test: blockdev write zeroes read split partial ...passed 00:06:51.420 Test: blockdev reset ...[2024-11-20 23:55:41.639648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:51.420 passed 00:06:51.420 Test: blockdev write read 8 blocks ...[2024-11-20 23:55:41.641895] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:51.420 passed 00:06:51.420 Test: blockdev write read size > 128k ...passed 00:06:51.420 Test: blockdev write read invalid size ...passed 00:06:51.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.420 Test: blockdev write read max offset ...passed 00:06:51.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.420 Test: blockdev writev readv 8 blocks ...passed 00:06:51.420 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.420 Test: blockdev writev readv block ...passed 00:06:51.420 Test: blockdev writev readv size > 128k ...passed 00:06:51.420 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.420 Test: blockdev comparev and writev ...[2024-11-20 23:55:41.657386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e0a000 len:0x1000 00:06:51.420 [2024-11-20 23:55:41.657438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:51.420 passed 00:06:51.420 Test: blockdev nvme passthru rw ...passed 00:06:51.420 Test: blockdev nvme passthru vendor specific ...[2024-11-20 23:55:41.659638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:06:51.420 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:51.420 [2024-11-20 23:55:41.659763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:51.420 passed 00:06:51.420 Test: blockdev copy ...passed 00:06:51.420 Suite: bdevio tests on: Nvme2n3 00:06:51.420 Test: blockdev write read block ...passed 00:06:51.420 Test: blockdev write zeroes read block ...passed 00:06:51.420 Test: blockdev write zeroes read no split ...passed 00:06:51.420 Test: blockdev write zeroes read split ...passed 00:06:51.420 Test: blockdev write zeroes read split partial ...passed 00:06:51.420 Test: blockdev reset ...[2024-11-20 23:55:41.689350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:51.420 [2024-11-20 23:55:41.691426] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:51.420 passed 00:06:51.420 Test: blockdev write read 8 blocks ...passed 00:06:51.420 Test: blockdev write read size > 128k ...passed 00:06:51.420 Test: blockdev write read invalid size ...passed 00:06:51.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.420 Test: blockdev write read max offset ...passed 00:06:51.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.420 Test: blockdev writev readv 8 blocks ...passed 00:06:51.420 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.420 Test: blockdev writev readv block ...passed 00:06:51.420 Test: blockdev writev readv size > 128k ...passed 00:06:51.420 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.420 Test: blockdev comparev and writev ...[2024-11-20 23:55:41.698286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e03000 len:0x1000 00:06:51.420 [2024-11-20 23:55:41.698337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:51.420 passed 00:06:51.420 Test: blockdev nvme passthru rw ...passed 00:06:51.420 Test: blockdev nvme passthru vendor specific ...passed 00:06:51.420 Test: blockdev nvme admin passthru ...[2024-11-20 23:55:41.699178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:51.420 [2024-11-20 23:55:41.699209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:51.420 passed 00:06:51.420 Test: blockdev copy ...passed 00:06:51.420 Suite: bdevio tests on: Nvme2n2 00:06:51.420 Test: blockdev write read block ...passed 00:06:51.420 Test: blockdev write zeroes read block ...passed 00:06:51.420 Test: blockdev write zeroes read no split ...passed 00:06:51.420 Test: blockdev write zeroes read split ...passed 00:06:51.420 Test: blockdev write zeroes read split partial ...passed 00:06:51.420 Test: blockdev reset ...[2024-11-20 23:55:41.711985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:51.420 passed 00:06:51.420 Test: blockdev write read 8 blocks ...[2024-11-20 23:55:41.713783] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:51.420 passed 00:06:51.420 Test: blockdev write read size > 128k ...passed 00:06:51.420 Test: blockdev write read invalid size ...passed 00:06:51.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.420 Test: blockdev write read max offset ...passed 00:06:51.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.420 Test: blockdev writev readv 8 blocks ...passed 00:06:51.420 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.420 Test: blockdev writev readv block ...passed 00:06:51.420 Test: blockdev writev readv size > 128k ...passed 00:06:51.420 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.420 Test: blockdev comparev and writev ...[2024-11-20 23:55:41.719157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e03000 len:0x1000 00:06:51.420 [2024-11-20 23:55:41.719198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:51.420 passed 00:06:51.420 Test: blockdev nvme passthru rw ...passed 00:06:51.420 Test: blockdev nvme passthru vendor specific ...passed 00:06:51.420 Test: blockdev nvme admin passthru ...[2024-11-20 23:55:41.719761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:51.420 [2024-11-20 23:55:41.719790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:51.420 passed 00:06:51.420 Test: blockdev copy ...passed 00:06:51.420 Suite: bdevio tests on: Nvme2n1 00:06:51.420 Test: blockdev write read block ...passed 00:06:51.420 Test: blockdev write zeroes read block ...passed 00:06:51.420 Test: blockdev write zeroes read no split ...passed 00:06:51.420 Test: blockdev write zeroes read split ...passed 00:06:51.420 Test: blockdev write zeroes read split partial ...passed 00:06:51.420 Test: blockdev reset ...[2024-11-20 23:55:41.733929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:51.420 passed 00:06:51.420 Test: blockdev write read 8 blocks ...[2024-11-20 23:55:41.736588] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:51.420 passed 00:06:51.420 Test: blockdev write read size > 128k ...passed 00:06:51.420 Test: blockdev write read invalid size ...passed 00:06:51.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.420 Test: blockdev write read max offset ...passed 00:06:51.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.420 Test: blockdev writev readv 8 blocks ...passed 00:06:51.420 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.420 Test: blockdev writev readv block ...passed 00:06:51.420 Test: blockdev writev readv size > 128k ...passed 00:06:51.420 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.420 Test: blockdev comparev and writev ...[2024-11-20 23:55:41.751115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e03000 len:0x1000 00:06:51.420 [2024-11-20 23:55:41.751153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:51.420 passed 00:06:51.421 Test: blockdev nvme passthru rw ...passed 00:06:51.421 Test: blockdev nvme passthru vendor specific ...[2024-11-20 23:55:41.753349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:06:51.421 Test: blockdev nvme admin passthru ...passed 00:06:51.421 Test: blockdev copy ...RP2 0x0 00:06:51.421 [2024-11-20 23:55:41.753451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:51.421 passed 00:06:51.421 Suite: bdevio tests on: Nvme1n1 00:06:51.421 Test: blockdev write read block ...passed 00:06:51.421 Test: blockdev write zeroes read block ...passed 00:06:51.421 Test: blockdev write zeroes read no split ...passed 00:06:51.421 Test: blockdev write zeroes read split ...passed 00:06:51.421 Test: blockdev write zeroes read split partial ...passed 00:06:51.421 Test: blockdev reset ...[2024-11-20 23:55:41.771949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:51.421 [2024-11-20 23:55:41.774320] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:51.421 passed 00:06:51.421 Test: blockdev write read 8 blocks ...passed 00:06:51.421 Test: blockdev write read size > 128k ...passed 00:06:51.421 Test: blockdev write read invalid size ...passed 00:06:51.421 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.421 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.421 Test: blockdev write read max offset ...passed 00:06:51.421 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.421 Test: blockdev writev readv 8 blocks ...passed 00:06:51.421 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.421 Test: blockdev writev readv block ...passed 00:06:51.421 Test: blockdev writev readv size > 128k ...passed 00:06:51.421 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.421 Test: blockdev comparev and writev ...[2024-11-20 23:55:41.788761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:51.421 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2c8236000 len:0x1000 00:06:51.421 [2024-11-20 23:55:41.788887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:51.421 passed 00:06:51.421 Test: blockdev nvme passthru vendor specific ...passed 00:06:51.421 Test: blockdev nvme admin passthru ...[2024-11-20 23:55:41.789638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:51.421 [2024-11-20 23:55:41.789672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:51.421 passed 00:06:51.421 Test: blockdev copy ...passed 00:06:51.421 Suite: bdevio tests on: Nvme0n1 00:06:51.421 Test: blockdev write read block ...passed 00:06:51.421 Test: blockdev write zeroes read block ...passed 00:06:51.421 Test: blockdev write zeroes read no split ...passed 00:06:51.421 Test: blockdev write zeroes read split ...passed 00:06:51.421 Test: blockdev write zeroes read split partial ...passed 00:06:51.421 Test: blockdev reset ...[2024-11-20 23:55:41.807756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:51.421 [2024-11-20 23:55:41.809576] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:51.421 passed 00:06:51.421 Test: blockdev write read 8 blocks ...passed 00:06:51.421 Test: blockdev write read size > 128k ...passed 00:06:51.421 Test: blockdev write read invalid size ...passed 00:06:51.421 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:51.421 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:51.421 Test: blockdev write read max offset ...passed 00:06:51.421 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:51.421 Test: blockdev writev readv 8 blocks ...passed 00:06:51.421 Test: blockdev writev readv 30 x 1block ...passed 00:06:51.421 Test: blockdev writev readv block ...passed 00:06:51.421 Test: blockdev writev readv size > 128k ...passed 00:06:51.421 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:51.421 Test: blockdev comparev and writev ...passed 00:06:51.421 Test: blockdev nvme passthru rw ...[2024-11-20 23:55:41.822609] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:51.421 separate metadata which is not supported yet. 00:06:51.421 passed 00:06:51.421 Test: blockdev nvme passthru vendor specific ...[2024-11-20 23:55:41.823909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:06:51.421 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:51.421 [2024-11-20 23:55:41.824062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:51.421 passed 00:06:51.421 Test: blockdev copy ...passed 00:06:51.421 00:06:51.421 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.421 suites 6 6 n/a 0 0 00:06:51.421 tests 138 138 138 0 0 00:06:51.421 asserts 893 893 893 0 n/a 00:06:51.421 00:06:51.421 Elapsed time = 0.472 seconds 00:06:51.421 0 00:06:51.681 23:55:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71926 00:06:51.681 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71926 ']' 00:06:51.681 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71926 00:06:51.681 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:51.681 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.681 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71926 00:06:51.681 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.681 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.681 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71926' 00:06:51.681 killing process with pid 71926 00:06:51.681 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71926 00:06:51.681 23:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71926 00:06:51.681 23:55:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:51.681 00:06:51.681 real 0m1.397s 00:06:51.681 user 0m3.525s 00:06:51.681 sys 0m0.257s 00:06:51.681 23:55:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.681 23:55:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:51.681 ************************************ 00:06:51.681 END 
TEST bdev_bounds 00:06:51.681 ************************************ 00:06:51.681 23:55:42 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:51.681 23:55:42 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:51.681 23:55:42 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.681 23:55:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.681 ************************************ 00:06:51.681 START TEST bdev_nbd 00:06:51.681 ************************************ 00:06:51.681 23:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:51.681 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:51.681 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:51.681 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.681 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:51.681 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.681 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:51.681 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:51.681 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71975 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71975 /var/tmp/spdk-nbd.sock 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71975 ']' 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
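The nbd_function_test run that follows maps each bdev onto a kernel /dev/nbdX node through the spdk-nbd.sock RPC socket, then probes the node the way waitfornbd does: look for it in /proc/partitions, then read one block with O_DIRECT (that is where the 1+0 records in/out lines below come from). A condensed per-device sketch, with the RPC socket and device names taken from this log (the until-loop stands in for waitfornbd's bounded retry, and /tmp/nbdtest is just a scratch path):

  # Export the bdev as /dev/nbd0 via the NBD RPC server.
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  # Wait for the kernel to publish the device, as waitfornbd does.
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
  # Direct-I/O read of a single 4 KiB block to prove the data path works.
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct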
00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:51.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.682 23:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:51.940 [2024-11-20 23:55:42.151400] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:51.940 [2024-11-20 23:55:42.151510] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.940 [2024-11-20 23:55:42.286628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.940 [2024-11-20 23:55:42.319745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.874 23:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.874 23:55:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:52.874 23:55:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:52.874 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.874 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:52.874 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:52.875 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:52.875 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.875 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:52.875 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:52.875 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:52.875 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:52.875 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:52.875 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:52.875 23:55:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 
-- # (( i <= 20 )) 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.875 1+0 records in 00:06:52.875 1+0 records out 00:06:52.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083297 s, 4.9 MB/s 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:52.875 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.133 1+0 records in 00:06:53.133 1+0 records out 00:06:53.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448883 s, 9.1 MB/s 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 
-- # return 0 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:53.133 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:53.396 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.396 1+0 records in 00:06:53.396 1+0 records out 00:06:53.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000803621 s, 5.1 MB/s 00:06:53.397 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.397 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:53.397 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.397 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:53.397 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:53.397 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.397 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:53.397 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.658 1+0 records in 00:06:53.658 1+0 records out 00:06:53.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490459 s, 8.4 MB/s 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:53.658 23:55:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.917 1+0 records in 00:06:53.917 1+0 records out 00:06:53.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000750377 s, 5.5 MB/s 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:53.917 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.176 1+0 records in 00:06:54.176 1+0 records out 00:06:54.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011094 s, 3.7 MB/s 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:54.176 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.445 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd0", 00:06:54.445 "bdev_name": "Nvme0n1" 00:06:54.445 }, 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd1", 00:06:54.445 "bdev_name": "Nvme1n1" 00:06:54.445 }, 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd2", 00:06:54.445 "bdev_name": "Nvme2n1" 00:06:54.445 }, 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd3", 00:06:54.445 "bdev_name": "Nvme2n2" 00:06:54.445 }, 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd4", 00:06:54.445 "bdev_name": "Nvme2n3" 00:06:54.445 }, 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd5", 00:06:54.445 "bdev_name": "Nvme3n1" 00:06:54.445 } 00:06:54.445 ]' 00:06:54.445 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:54.445 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd0", 00:06:54.445 "bdev_name": "Nvme0n1" 00:06:54.445 }, 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd1", 00:06:54.445 "bdev_name": "Nvme1n1" 
00:06:54.445 }, 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd2", 00:06:54.445 "bdev_name": "Nvme2n1" 00:06:54.445 }, 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd3", 00:06:54.445 "bdev_name": "Nvme2n2" 00:06:54.445 }, 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd4", 00:06:54.445 "bdev_name": "Nvme2n3" 00:06:54.445 }, 00:06:54.445 { 00:06:54.445 "nbd_device": "/dev/nbd5", 00:06:54.445 "bdev_name": "Nvme3n1" 00:06:54.445 } 00:06:54.445 ]' 00:06:54.445 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:54.445 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:54.445 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.445 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:54.445 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.445 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:54.445 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.445 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.732 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.732 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.732 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.732 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.732 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.732 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.732 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.732 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.732 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.732 23:55:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.732 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.732 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.732 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.732 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.732 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.732 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.732 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.732 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.732 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.732 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:54.991 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 
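At this point every namespace has been exported once and the test is tearing the exports down again. The two loops are mirror images: after each nbd_start_disk RPC a waitfornbd helper polled /proc/partitions until the kernel exposed the device and then proved it readable with one direct 4 KiB read, while the waitfornbd_exit calls here poll the same file until the device disappears after nbd_stop_disk. A minimal standalone sketch of the appear-then-read handshake (retry budget, sleep interval, and file paths are illustrative, not the exact autotest_common.sh helpers):

    # Poll until the kernel lists the nbd device in /proc/partitions.
    wait_nbd_present() {
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && return 0
            sleep 0.1
        done
        return 1
    }

    # Prove the device is readable, not merely listed: one direct 4 KiB read,
    # then confirm the copy really is 4096 bytes, as the dd/stat pairs above do.
    wait_nbd_present nbd0 &&
        dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
        [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]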
00:06:54.991 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:54.991 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:54.991 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.991 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.991 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:54.991 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.991 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.991 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.991 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:55.249 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:55.249 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:55.249 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:55.249 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.249 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.249 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:55.249 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.249 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.249 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.249 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:55.508 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:55.508 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:55.508 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:55.508 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.508 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.508 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:55.508 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.508 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.508 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.508 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:55.766 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:55.766 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:55.766 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:55.766 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.766 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.766 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:55.766 23:55:45 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:06:55.766 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.766 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.766 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.766 23:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.766 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:56.025 /dev/nbd0 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.025 1+0 records in 00:06:56.025 1+0 records out 00:06:56.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642341 s, 6.4 MB/s 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:56.025 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:56.284 /dev/nbd1 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.284 1+0 records in 00:06:56.284 
1+0 records out 00:06:56.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690137 s, 5.9 MB/s 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:56.284 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:56.543 /dev/nbd10 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.543 1+0 records in 00:06:56.543 1+0 records out 00:06:56.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0009567 s, 4.3 MB/s 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:56.543 23:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:56.804 /dev/nbd11 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:56.804 
23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.804 1+0 records in 00:06:56.804 1+0 records out 00:06:56.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129161 s, 3.2 MB/s 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:56.804 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:57.064 /dev/nbd12 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.064 1+0 records in 00:06:57.064 1+0 records out 00:06:57.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480662 s, 8.5 MB/s 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:57.064 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:57.322 /dev/nbd13 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.322 1+0 records in 00:06:57.322 1+0 records out 00:06:57.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483825 s, 8.5 MB/s 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.322 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd0", 00:06:57.581 "bdev_name": "Nvme0n1" 00:06:57.581 }, 00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd1", 00:06:57.581 "bdev_name": "Nvme1n1" 00:06:57.581 }, 00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd10", 00:06:57.581 "bdev_name": "Nvme2n1" 00:06:57.581 }, 00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd11", 00:06:57.581 "bdev_name": "Nvme2n2" 00:06:57.581 }, 
00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd12", 00:06:57.581 "bdev_name": "Nvme2n3" 00:06:57.581 }, 00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd13", 00:06:57.581 "bdev_name": "Nvme3n1" 00:06:57.581 } 00:06:57.581 ]' 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd0", 00:06:57.581 "bdev_name": "Nvme0n1" 00:06:57.581 }, 00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd1", 00:06:57.581 "bdev_name": "Nvme1n1" 00:06:57.581 }, 00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd10", 00:06:57.581 "bdev_name": "Nvme2n1" 00:06:57.581 }, 00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd11", 00:06:57.581 "bdev_name": "Nvme2n2" 00:06:57.581 }, 00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd12", 00:06:57.581 "bdev_name": "Nvme2n3" 00:06:57.581 }, 00:06:57.581 { 00:06:57.581 "nbd_device": "/dev/nbd13", 00:06:57.581 "bdev_name": "Nvme3n1" 00:06:57.581 } 00:06:57.581 ]' 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:57.581 /dev/nbd1 00:06:57.581 /dev/nbd10 00:06:57.581 /dev/nbd11 00:06:57.581 /dev/nbd12 00:06:57.581 /dev/nbd13' 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:57.581 /dev/nbd1 00:06:57.581 /dev/nbd10 00:06:57.581 /dev/nbd11 00:06:57.581 /dev/nbd12 00:06:57.581 /dev/nbd13' 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:57.581 256+0 records in 00:06:57.581 256+0 records out 00:06:57.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00604966 s, 173 MB/s 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:57.581 256+0 records in 00:06:57.581 256+0 records out 00:06:57.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0609298 s, 17.2 MB/s 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.581 23:55:47 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:57.581 256+0 records in 00:06:57.581 256+0 records out 00:06:57.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0754457 s, 13.9 MB/s 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.581 23:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:57.840 256+0 records in 00:06:57.840 256+0 records out 00:06:57.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0627719 s, 16.7 MB/s 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:57.840 256+0 records in 00:06:57.840 256+0 records out 00:06:57.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0600229 s, 17.5 MB/s 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:57.840 256+0 records in 00:06:57.840 256+0 records out 00:06:57.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0606311 s, 17.3 MB/s 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:57.840 256+0 records in 00:06:57.840 256+0 records out 00:06:57.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0610809 s, 17.2 MB/s 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd10 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.840 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.098 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.098 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.098 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.099 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.099 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.099 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.099 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.099 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.099 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.099 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.357 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.357 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:58.357 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.357 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.357 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.357 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.357 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.357 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.357 23:55:48 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.357 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:58.614 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:58.615 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:58.615 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.615 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.615 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:58.615 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.615 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.615 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.615 23:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:58.872 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:58.872 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:58.872 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:58.872 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.872 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.872 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:58.872 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.872 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.872 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.872 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd13 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.131 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:59.389 malloc_lvol_verify 00:06:59.389 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:59.649 67b4963b-cdbe-49b2-807f-a86a94bb2691 00:06:59.649 23:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:59.906 461c7940-f9ae-4f89-90f5-ba495a155ea8 00:06:59.906 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:00.164 /dev/nbd0 00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 
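The closing smoke test above builds a 16 MiB malloc bdev, carves a 4 MiB logical volume out of it (bdev_lvol_create lvol 4), exports it as /dev/nbd0, and gates on the kernel publishing a nonzero capacity before formatting. A sketch of that gate (the helper body is illustrative; the 8192-sector figure is the value visible in this run and corresponds to the 4 MiB lvol):

    # /sys/block/<nbd>/size reports capacity in 512-byte sectors and stays 0
    # until the nbd handshake completes, so mkfs has to wait for it.
    wait_for_nbd_set_capacity() {
        local nbd=$1
        local size_file=/sys/block/$nbd/size
        [ -e "$size_file" ] || return 1
        (( $(cat "$size_file") != 0 ))   # here: 8192 sectors == 4 MiB
    }

    wait_for_nbd_set_capacity nbd0 && mkfs.ext4 /dev/nbd0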
00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:07:00.164 mke2fs 1.47.0 (5-Feb-2023)
00:07:00.164 Discarding device blocks: 0/4096 done
00:07:00.164 Creating filesystem with 4096 1k blocks and 1024 inodes
00:07:00.164
00:07:00.164 Allocating group tables: 0/1 done
00:07:00.164 Writing inode tables: 0/1 done
00:07:00.164 Creating journal (1024 blocks): done
00:07:00.164 Writing superblocks and filesystem accounting information: 0/1 done
00:07:00.164
00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:00.164 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71975
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71975 ']'
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71975
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71975
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:00.422 killing process with pid 71975
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71975'
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71975
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71975
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:07:00.422
00:07:00.422 real 0m8.732s
00:07:00.422 user 0m12.813s
00:07:00.422 sys 0m2.865s
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:00.422 23:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:07:00.422 ************************************
00:07:00.422 END TEST bdev_nbd
00:07:00.422 ************************************
00:07:00.680 23:55:50 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:07:00.680 23:55:50 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']'
00:07:00.680 skipping fio tests on NVMe due to multi-ns failures.
00:07:00.680 23:55:50 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:07:00.680 23:55:50 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:07:00.680 23:55:50 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:00.680 23:55:50 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:07:00.680 23:55:50 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:00.680 23:55:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:00.680 ************************************
00:07:00.680 START TEST bdev_verify
00:07:00.680 ************************************
00:07:00.680 23:55:50 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:00.680 [2024-11-20 23:55:50.944082] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:07:00.680 [2024-11-20 23:55:50.944192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72336 ]
00:07:00.680 [2024-11-20 23:55:51.079607] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:00.938 [2024-11-20 23:55:51.111337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.938 [2024-11-20 23:55:51.111411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:01.195 Running I/O for 5 seconds...
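The verify stage that starts here is driven by a single bdevperf process; stripped of the test harness, the invocation recorded above amounts to the following (flag summaries are paraphrased from bdevperf usage, not taken from this log):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128     queue depth per job
    # -o 4096    I/O size in bytes
    # -w verify  write a pattern, read it back, and compare the payload
    # -t 5       run time in seconds
    # -m 0x3     core mask: cores 0 and 1 (the two reactor notices above)
    # -C         submit to every bdev from every core, which is why the results
    #            table below lists a Core Mask 0x1 and a Core Mask 0x2 job per
    #            namespace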
00:07:03.614 24256.00 IOPS, 94.75 MiB/s [2024-11-20T23:55:54.975Z] 23008.00 IOPS, 89.88 MiB/s [2024-11-20T23:55:55.919Z] 22272.00 IOPS, 87.00 MiB/s [2024-11-20T23:55:56.861Z] 21808.00 IOPS, 85.19 MiB/s [2024-11-20T23:55:56.861Z] 21360.00 IOPS, 83.44 MiB/s 00:07:06.440 Latency(us) 00:07:06.440 [2024-11-20T23:55:56.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.440 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0x0 length 0xbd0bd 00:07:06.440 Nvme0n1 : 5.05 1750.53 6.84 0.00 0.00 72732.85 9628.75 123409.33 00:07:06.440 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:06.440 Nvme0n1 : 5.07 1767.02 6.90 0.00 0.00 71709.07 9729.58 98001.53 00:07:06.440 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0x0 length 0xa0000 00:07:06.440 Nvme1n1 : 5.06 1758.69 6.87 0.00 0.00 72538.33 12905.55 125829.12 00:07:06.440 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0xa0000 length 0xa0000 00:07:06.440 Nvme1n1 : 5.08 1777.38 6.94 0.00 0.00 71237.72 4411.08 106470.79 00:07:06.440 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0x0 length 0x80000 00:07:06.440 Nvme2n1 : 5.06 1758.24 6.87 0.00 0.00 72429.02 5444.53 125829.12 00:07:06.440 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0x80000 length 0x80000 00:07:06.440 Nvme2n1 : 5.06 1771.05 6.92 0.00 0.00 72073.49 11342.77 98001.53 00:07:06.440 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0x0 length 0x80000 00:07:06.440 Nvme2n2 : 5.06 1762.15 6.88 0.00 0.00 72123.33 7662.67 106470.79 00:07:06.440 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0x80000 length 0x80000 00:07:06.440 Nvme2n2 : 5.05 1761.59 6.88 0.00 0.00 72291.03 10132.87 108890.58 00:07:06.440 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0x0 length 0x80000 00:07:06.440 Nvme2n3 : 5.06 1761.52 6.88 0.00 0.00 72032.90 8771.74 105664.20 00:07:06.440 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0x80000 length 0x80000 00:07:06.440 Nvme2n3 : 5.06 1759.09 6.87 0.00 0.00 72134.76 7612.26 109697.18 00:07:06.440 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0x0 length 0x20000 00:07:06.440 Nvme3n1 : 5.07 1768.27 6.91 0.00 0.00 71722.13 734.13 105664.20 00:07:06.440 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:06.440 Verification LBA range: start 0x20000 length 0x20000 00:07:06.440 Nvme3n1 : 5.07 1767.50 6.90 0.00 0.00 71735.91 4637.93 112116.97 00:07:06.440 [2024-11-20T23:55:56.861Z] =================================================================================================================== 00:07:06.440 [2024-11-20T23:55:56.861Z] Total : 21163.02 82.67 0.00 0.00 72061.46 734.13 125829.12 00:07:07.010 00:07:07.010 real 0m6.486s 00:07:07.010 user 0m12.229s 00:07:07.010 sys 0m0.201s 00:07:07.010 ************************************ 00:07:07.010 
END TEST bdev_verify 00:07:07.010 ************************************ 00:07:07.010 23:55:57 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.010 23:55:57 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:07.010 23:55:57 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:07.010 23:55:57 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:07.010 23:55:57 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.010 23:55:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.271 ************************************ 00:07:07.271 START TEST bdev_verify_big_io 00:07:07.271 ************************************ 00:07:07.271 23:55:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:07.271 [2024-11-20 23:55:57.495088] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:07.271 [2024-11-20 23:55:57.495198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72429 ] 00:07:07.271 [2024-11-20 23:55:57.627067] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.271 [2024-11-20 23:55:57.683671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.271 [2024-11-20 23:55:57.683797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.843 Running I/O for 5 seconds... 
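bdev_verify_big_io is the same bdevperf run with a single change: 64 KiB I/Os (-o 65536) in place of 4 KiB, which is why the table below reports hundreds rather than tens of thousands of IOPS. A sketch under the same path assumptions as above:

# Large-I/O verify pass: only the -o value differs from the previous run.
spdk/build/examples/bdevperf \
    --json spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3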
00:07:11.773 1122.00 IOPS, 70.12 MiB/s [2024-11-20T23:56:03.579Z] 2006.50 IOPS, 125.41 MiB/s [2024-11-20T23:56:04.150Z] 1888.00 IOPS, 118.00 MiB/s [2024-11-20T23:56:04.409Z] 2263.00 IOPS, 141.44 MiB/s 00:07:13.988 Latency(us) 00:07:13.988 [2024-11-20T23:56:04.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.988 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.988 Verification LBA range: start 0x0 length 0xbd0b 00:07:13.988 Nvme0n1 : 5.69 116.28 7.27 0.00 0.00 1047866.42 35691.91 1548666.09 00:07:13.988 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.988 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:13.988 Nvme0n1 : 5.46 134.32 8.40 0.00 0.00 922964.88 41539.74 1032444.06 00:07:13.988 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.988 Verification LBA range: start 0x0 length 0xa000 00:07:13.988 Nvme1n1 : 5.77 119.83 7.49 0.00 0.00 990108.52 56058.49 1574477.19 00:07:13.988 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.988 Verification LBA range: start 0xa000 length 0xa000 00:07:13.988 Nvme1n1 : 5.62 136.63 8.54 0.00 0.00 877982.85 81869.59 851766.35 00:07:13.988 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.988 Verification LBA range: start 0x0 length 0x8000 00:07:13.988 Nvme2n1 : 5.77 124.11 7.76 0.00 0.00 932668.81 71787.13 1606741.07 00:07:13.988 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.988 Verification LBA range: start 0x8000 length 0x8000 00:07:13.988 Nvme2n1 : 5.77 137.15 8.57 0.00 0.00 841129.67 144380.85 845313.58 00:07:13.988 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.988 Verification LBA range: start 0x0 length 0x8000 00:07:13.988 Nvme2n2 : 5.85 127.87 7.99 0.00 0.00 872886.41 77433.30 1632552.17 00:07:13.988 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.988 Verification LBA range: start 0x8000 length 0x8000 00:07:13.988 Nvme2n2 : 5.88 148.53 9.28 0.00 0.00 766394.67 38716.65 871124.68 00:07:13.988 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.988 Verification LBA range: start 0x0 length 0x8000 00:07:13.988 Nvme2n3 : 5.90 138.29 8.64 0.00 0.00 786389.40 10132.87 1677721.60 00:07:13.988 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.988 Verification LBA range: start 0x8000 length 0x8000 00:07:13.988 Nvme2n3 : 5.89 147.83 9.24 0.00 0.00 744246.75 38716.65 896935.78 00:07:13.988 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.988 Verification LBA range: start 0x0 length 0x2000 00:07:13.988 Nvme3n1 : 5.98 193.17 12.07 0.00 0.00 548699.77 274.12 922746.88 00:07:13.988 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.989 Verification LBA range: start 0x2000 length 0x2000 00:07:13.989 Nvme3n1 : 5.89 162.93 10.18 0.00 0.00 660222.29 3012.14 903388.55 00:07:13.989 [2024-11-20T23:56:04.410Z] =================================================================================================================== 00:07:13.989 [2024-11-20T23:56:04.410Z] Total : 1686.94 105.43 0.00 0.00 811646.31 274.12 1677721.60 00:07:14.933 00:07:14.933 real 0m7.571s 00:07:14.933 user 0m14.299s 00:07:14.933 sys 0m0.296s 00:07:14.933 23:56:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:14.933 ************************************ 00:07:14.933 END TEST bdev_verify_big_io 00:07:14.933 ************************************ 00:07:14.933 23:56:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:14.933 23:56:05 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:14.933 23:56:05 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:14.933 23:56:05 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.933 23:56:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.933 ************************************ 00:07:14.933 START TEST bdev_write_zeroes 00:07:14.933 ************************************ 00:07:14.933 23:56:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:14.933 [2024-11-20 23:56:05.133882] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:14.933 [2024-11-20 23:56:05.134042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72527 ] 00:07:14.933 [2024-11-20 23:56:05.272689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.933 [2024-11-20 23:56:05.325907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.506 Running I/O for 1 seconds... 
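bdev_write_zeroes swaps the workload selector to write_zeroes and shortens the run to one second on a single core (the EAL line shows -c 0x1, and the trace passes neither -C nor -m). A sketch under the same path assumptions:

# One-second write_zeroes pass, queue depth 128, 4 KiB I/Os, one core.
spdk/build/examples/bdevperf \
    --json spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1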
00:07:16.449 64497.00 IOPS, 251.94 MiB/s 00:07:16.449 Latency(us) 00:07:16.449 [2024-11-20T23:56:06.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.449 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.449 Nvme0n1 : 1.02 10694.46 41.78 0.00 0.00 11939.43 4839.58 26214.40 00:07:16.449 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.449 Nvme1n1 : 1.02 10696.28 41.78 0.00 0.00 11922.74 9376.69 22383.06 00:07:16.449 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.449 Nvme2n1 : 1.02 10684.10 41.73 0.00 0.00 11902.24 9477.51 21677.29 00:07:16.449 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.449 Nvme2n2 : 1.03 10672.00 41.69 0.00 0.00 11894.40 9326.28 21173.17 00:07:16.449 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.449 Nvme2n3 : 1.03 10659.82 41.64 0.00 0.00 11885.28 9326.28 20870.70 00:07:16.449 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.449 Nvme3n1 : 1.03 10647.75 41.59 0.00 0.00 11878.00 9225.45 22584.71 00:07:16.449 [2024-11-20T23:56:06.870Z] =================================================================================================================== 00:07:16.449 [2024-11-20T23:56:06.870Z] Total : 64054.41 250.21 0.00 0.00 11903.67 4839.58 26214.40 00:07:16.710 00:07:16.710 real 0m1.895s 00:07:16.710 user 0m1.570s 00:07:16.710 sys 0m0.210s 00:07:16.710 23:56:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.710 ************************************ 00:07:16.710 END TEST bdev_write_zeroes 00:07:16.710 ************************************ 00:07:16.710 23:56:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:16.710 23:56:07 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:16.710 23:56:07 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:16.710 23:56:07 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.710 23:56:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:16.710 ************************************ 00:07:16.710 START TEST bdev_json_nonenclosed 00:07:16.710 ************************************ 00:07:16.710 23:56:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:16.710 [2024-11-20 23:56:07.088933] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:16.710 [2024-11-20 23:56:07.089058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72571 ] 00:07:16.972 [2024-11-20 23:56:07.224979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.972 [2024-11-20 23:56:07.276233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.972 [2024-11-20 23:56:07.276387] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:16.972 [2024-11-20 23:56:07.276408] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:16.972 [2024-11-20 23:56:07.276421] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.972 00:07:16.972 real 0m0.356s 00:07:16.972 user 0m0.148s 00:07:16.972 sys 0m0.103s 00:07:16.972 23:56:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.972 ************************************ 00:07:16.972 END TEST bdev_json_nonenclosed 00:07:16.972 ************************************ 00:07:16.972 23:56:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:17.231 23:56:07 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:17.231 23:56:07 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:17.231 23:56:07 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.231 23:56:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.231 ************************************ 00:07:17.231 START TEST bdev_json_nonarray 00:07:17.231 ************************************ 00:07:17.231 23:56:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:17.231 [2024-11-20 23:56:07.517748] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:17.231 [2024-11-20 23:56:07.517882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72591 ] 00:07:17.231 [2024-11-20 23:56:07.645697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.489 [2024-11-20 23:56:07.678696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.489 [2024-11-20 23:56:07.678800] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
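The two JSON negative tests feed bdevperf deliberately malformed configs: nonenclosed.json omits the enclosing braces ("not enclosed in {}" above) and nonarray.json makes "subsystems" something other than an array ("'subsystems' should be an array" above). For contrast, a sketch of the shape json_config accepts; the file name and the minimal bdev subsystem entry are illustrative:

# A well-formed config: one top-level object whose "subsystems"
# key is an array of per-subsystem objects.
cat > /tmp/minimal.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF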
00:07:17.489 [2024-11-20 23:56:07.678818] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:17.489 [2024-11-20 23:56:07.678828] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.489 00:07:17.489 real 0m0.305s 00:07:17.489 user 0m0.120s 00:07:17.489 sys 0m0.082s 00:07:17.489 23:56:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.489 ************************************ 00:07:17.489 END TEST bdev_json_nonarray 00:07:17.489 ************************************ 00:07:17.489 23:56:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:17.489 23:56:07 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:17.489 23:56:07 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:17.489 23:56:07 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:17.489 23:56:07 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:17.489 23:56:07 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:17.489 23:56:07 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:17.489 23:56:07 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:17.489 23:56:07 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:17.489 23:56:07 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:17.489 23:56:07 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:17.489 23:56:07 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:17.489 00:07:17.489 real 0m29.850s 00:07:17.489 user 0m47.250s 00:07:17.489 sys 0m4.839s 00:07:17.489 23:56:07 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.489 ************************************ 00:07:17.489 END TEST blockdev_nvme 00:07:17.489 23:56:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.489 ************************************ 00:07:17.489 23:56:07 -- spdk/autotest.sh@209 -- # uname -s 00:07:17.489 23:56:07 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:17.489 23:56:07 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:17.489 23:56:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:17.489 23:56:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.489 23:56:07 -- common/autotest_common.sh@10 -- # set +x 00:07:17.489 ************************************ 00:07:17.489 START TEST blockdev_nvme_gpt 00:07:17.489 ************************************ 00:07:17.489 23:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:17.748 * Looking for test storage... 
00:07:17.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:17.748 23:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:17.748 23:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:17.748 23:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:17.748 23:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.748 23:56:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:17.748 23:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.748 23:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:17.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.748 --rc genhtml_branch_coverage=1 00:07:17.748 --rc genhtml_function_coverage=1 00:07:17.748 --rc genhtml_legend=1 00:07:17.748 --rc geninfo_all_blocks=1 00:07:17.748 --rc geninfo_unexecuted_blocks=1 00:07:17.748 00:07:17.748 ' 00:07:17.748 23:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:17.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.748 --rc 
genhtml_branch_coverage=1 00:07:17.748 --rc genhtml_function_coverage=1 00:07:17.748 --rc genhtml_legend=1 00:07:17.748 --rc geninfo_all_blocks=1 00:07:17.748 --rc geninfo_unexecuted_blocks=1 00:07:17.748 00:07:17.748 ' 00:07:17.748 23:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:17.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.748 --rc genhtml_branch_coverage=1 00:07:17.748 --rc genhtml_function_coverage=1 00:07:17.748 --rc genhtml_legend=1 00:07:17.748 --rc geninfo_all_blocks=1 00:07:17.748 --rc geninfo_unexecuted_blocks=1 00:07:17.748 00:07:17.748 ' 00:07:17.748 23:56:07 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:17.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.749 --rc genhtml_branch_coverage=1 00:07:17.749 --rc genhtml_function_coverage=1 00:07:17.749 --rc genhtml_legend=1 00:07:17.749 --rc geninfo_all_blocks=1 00:07:17.749 --rc geninfo_unexecuted_blocks=1 00:07:17.749 00:07:17.749 ' 00:07:17.749 23:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:17.749 23:56:07 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:17.749 23:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:17.749 23:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:17.749 23:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:17.749 23:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:17.749 23:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:17.749 23:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:17.749 23:56:07 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72675 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72675 
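start_spdk_tgt above launches a bare spdk_tgt and parks in waitforlisten until the RPC socket answers. A hand-rolled stand-in for that wait, assuming the default /var/tmp/spdk.sock socket and the repo at spdk/; rpc_get_methods is simply a cheap RPC to probe with:

# Start the target, then poll its RPC socket until it responds.
spdk/build/bin/spdk_tgt &
tgt_pid=$!
until spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done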
00:07:17.749 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:17.749 23:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 72675 ']' 00:07:17.749 23:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.749 23:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.749 23:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.749 23:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.749 23:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:17.749 [2024-11-20 23:56:08.081769] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:17.749 [2024-11-20 23:56:08.081888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72675 ] 00:07:18.007 [2024-11-20 23:56:08.217578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.007 [2024-11-20 23:56:08.250139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.575 23:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.575 23:56:08 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:18.575 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:18.575 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:18.575 23:56:08 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:18.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:19.091 Waiting for block devices as requested 00:07:19.091 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:19.091 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:19.349 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:19.349 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:24.630 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:24.630 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:24.630 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:24.630 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:24.630 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:24.630 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:24.631 23:56:14 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:24.631 23:56:14 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:24.631 BYT; 00:07:24.631 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:24.631 BYT; 00:07:24.631 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:24.631 23:56:14 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:24.631 23:56:14 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:25.564 The operation has completed successfully. 00:07:25.564 23:56:15 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:26.497 The operation has completed successfully. 00:07:26.497 23:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:27.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:27.320 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:27.320 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:27.320 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:27.320 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:27.578 23:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:27.578 23:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.578 23:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.578 [] 00:07:27.578 23:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.578 23:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:27.578 23:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:27.578 23:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:27.578 23:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:27.578 23:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:27.578 23:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.578 23:56:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.837 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.837 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:27.837 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:27.837 23:56:18 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.837 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.837 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.837 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:27.837 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:27.837 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.837 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.837 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:27.837 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:27.838 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "90aa6e5b-12c9-4333-98c2-ffcd910e9ba2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "90aa6e5b-12c9-4333-98c2-ffcd910e9ba2",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "032a4962-795c-4510-9f56-ffebfa71bd7c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "032a4962-795c-4510-9f56-ffebfa71bd7c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "23edd743-675a-4997-b18e-4b8811c2b667"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23edd743-675a-4997-b18e-4b8811c2b667",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2d8758b1-dcb6-4765-9e3f-d49d3a1e9ec2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2d8758b1-dcb6-4765-9e3f-d49d3a1e9ec2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3254c6de-c719-40d7-b584-1062285222d2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3254c6de-c719-40d7-b584-1062285222d2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:27.838 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:27.838 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:27.838 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:27.838 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72675 00:07:27.838 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 72675 ']' 00:07:27.838 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 72675 00:07:27.838 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:27.838 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.096 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72675 00:07:28.096 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.096 killing process with pid 72675 00:07:28.096 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.096 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72675' 00:07:28.096 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 72675 00:07:28.096 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 72675 00:07:28.096 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:28.096 23:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:28.096 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:28.096 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.096 23:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:28.354 ************************************ 00:07:28.354 START TEST bdev_hello_world 00:07:28.354 ************************************ 00:07:28.354 23:56:18 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:28.354 
[2024-11-20 23:56:18.583796] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:28.354 [2024-11-20 23:56:18.583905] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73281 ] 00:07:28.354 [2024-11-20 23:56:18.736047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.613 [2024-11-20 23:56:18.781905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.871 [2024-11-20 23:56:19.155472] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:28.871 [2024-11-20 23:56:19.155505] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:28.871 [2024-11-20 23:56:19.155523] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:28.871 [2024-11-20 23:56:19.157099] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:28.871 [2024-11-20 23:56:19.157505] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:28.871 [2024-11-20 23:56:19.157530] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:28.871 [2024-11-20 23:56:19.157737] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:28.871 00:07:28.871 [2024-11-20 23:56:19.157757] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:28.871 ************************************ 00:07:28.871 END TEST bdev_hello_world 00:07:28.871 ************************************ 00:07:28.871 00:07:28.871 real 0m0.763s 00:07:28.871 user 0m0.523s 00:07:28.871 sys 0m0.138s 00:07:28.871 23:56:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.871 23:56:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:29.128 23:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:29.128 23:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:29.128 23:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.128 23:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.128 ************************************ 00:07:29.128 START TEST bdev_bounds 00:07:29.128 ************************************ 00:07:29.128 Process bdevio pid: 73307 00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73307 00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73307' 00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73307 00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73307 ']' 00:07:29.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
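bdev_bounds starts the bdevio app in wait-for-RPC mode against the same bdev.json and then fires the test suite at it over the RPC socket via tests.py. A sketch of that pair under the harness layout, with a crude sleep standing in for waitforlisten:

# Start bdevio with the harness flags (-w: wait for the RPC start
# signal; -s 0: memory size, as in the trace) in the background.
spdk/test/bdev/bdevio/bdevio -w -s 0 --json spdk/test/bdev/bdev.json &
bdevio_pid=$!
sleep 1   # crude stand-in for waitforlisten
# Kick off the I/O boundary test suite over the RPC channel.
spdk/test/bdev/bdevio/tests.py perform_tests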
00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.128 23:56:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:29.128 [2024-11-20 23:56:19.404386] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:29.128 [2024-11-20 23:56:19.404521] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73307 ] 00:07:29.128 [2024-11-20 23:56:19.539161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.386 [2024-11-20 23:56:19.569594] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.386 [2024-11-20 23:56:19.569924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.386 [2024-11-20 23:56:19.569997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.953 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.953 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:29.953 23:56:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:29.953 I/O targets: 00:07:29.953 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:29.953 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:29.953 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:29.953 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:29.953 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:29.953 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:29.953 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:29.953 00:07:29.953 00:07:29.953 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.953 http://cunit.sourceforge.net/ 00:07:29.953 00:07:29.953 00:07:29.953 Suite: bdevio tests on: Nvme3n1 00:07:29.953 Test: blockdev write read block ...passed 00:07:29.953 Test: blockdev write zeroes read block ...passed 00:07:29.953 Test: blockdev write zeroes read no split ...passed 00:07:29.953 Test: blockdev write zeroes read split ...passed 00:07:29.953 Test: blockdev write zeroes read split partial ...passed 00:07:29.953 Test: blockdev reset ...[2024-11-20 23:56:20.308014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:29.953 [2024-11-20 23:56:20.310140] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:29.953 passed 00:07:29.953 Test: blockdev write read 8 blocks ...passed 00:07:29.953 Test: blockdev write read size > 128k ...passed 00:07:29.953 Test: blockdev write read invalid size ...passed 00:07:29.953 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:29.953 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:29.953 Test: blockdev write read max offset ...passed 00:07:29.953 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:29.953 Test: blockdev writev readv 8 blocks ...passed 00:07:29.953 Test: blockdev writev readv 30 x 1block ...passed 00:07:29.953 Test: blockdev writev readv block ...passed 00:07:29.953 Test: blockdev writev readv size > 128k ...passed 00:07:29.953 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:29.953 Test: blockdev comparev and writev ...[2024-11-20 23:56:20.316580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c320a000 len:0x1000 00:07:29.953 [2024-11-20 23:56:20.316626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:29.953 passed 00:07:29.953 Test: blockdev nvme passthru rw ...passed 00:07:29.953 Test: blockdev nvme passthru vendor specific ...[2024-11-20 23:56:20.317549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:29.953 passed 00:07:29.953 Test: blockdev nvme admin passthru ...[2024-11-20 23:56:20.317669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:29.953 passed 00:07:29.953 Test: blockdev copy ...passed 00:07:29.953 Suite: bdevio tests on: Nvme2n3 00:07:29.953 Test: blockdev write read block ...passed 00:07:29.953 Test: blockdev write zeroes read block ...passed 00:07:29.953 Test: blockdev write zeroes read no split ...passed 00:07:29.953 Test: blockdev write zeroes read split ...passed 00:07:29.953 Test: blockdev write zeroes read split partial ...passed 00:07:29.953 Test: blockdev reset ...[2024-11-20 23:56:20.329333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:29.953 passed 00:07:29.953 Test: blockdev write read 8 blocks ...[2024-11-20 23:56:20.330927] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:29.954 passed 00:07:29.954 Test: blockdev write read size > 128k ...passed 00:07:29.954 Test: blockdev write read invalid size ...passed 00:07:29.954 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:29.954 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:29.954 Test: blockdev write read max offset ...passed 00:07:29.954 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:29.954 Test: blockdev writev readv 8 blocks ...passed 00:07:29.954 Test: blockdev writev readv 30 x 1block ...passed 00:07:29.954 Test: blockdev writev readv block ...passed 00:07:29.954 Test: blockdev writev readv size > 128k ...passed 00:07:29.954 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:29.954 Test: blockdev comparev and writev ...[2024-11-20 23:56:20.335582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1804000 len:0x1000 00:07:29.954 [2024-11-20 23:56:20.335621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:29.954 passed 00:07:29.954 Test: blockdev nvme passthru rw ...passed 00:07:29.954 Test: blockdev nvme passthru vendor specific ...passed 00:07:29.954 Test: blockdev nvme admin passthru ...[2024-11-20 23:56:20.336077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:29.954 [2024-11-20 23:56:20.336106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:29.954 passed 00:07:29.954 Test: blockdev copy ...passed 00:07:29.954 Suite: bdevio tests on: Nvme2n2 00:07:29.954 Test: blockdev write read block ...passed 00:07:29.954 Test: blockdev write zeroes read block ...passed 00:07:29.954 Test: blockdev write zeroes read no split ...passed 00:07:29.954 Test: blockdev write zeroes read split ...passed 00:07:29.954 Test: blockdev write zeroes read split partial ...passed 00:07:29.954 Test: blockdev reset ...[2024-11-20 23:56:20.351106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:29.954 passed 00:07:29.954 Test: blockdev write read 8 blocks ...[2024-11-20 23:56:20.352588] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:29.954 passed 00:07:29.954 Test: blockdev write read size > 128k ...passed 00:07:29.954 Test: blockdev write read invalid size ...passed 00:07:29.954 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:29.954 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:29.954 Test: blockdev write read max offset ...passed 00:07:29.954 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:29.954 Test: blockdev writev readv 8 blocks ...passed 00:07:29.954 Test: blockdev writev readv 30 x 1block ...passed 00:07:29.954 Test: blockdev writev readv block ...passed 00:07:29.954 Test: blockdev writev readv size > 128k ...passed 00:07:29.954 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:29.954 Test: blockdev comparev and writev ...[2024-11-20 23:56:20.356787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1804000 len:0x1000 00:07:29.954 [2024-11-20 23:56:20.356907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:29.954 passed 00:07:29.954 Test: blockdev nvme passthru rw ...passed 00:07:29.954 Test: blockdev nvme passthru vendor specific ...passed 00:07:29.954 Test: blockdev nvme admin passthru ...[2024-11-20 23:56:20.357320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:29.954 [2024-11-20 23:56:20.357344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:29.954 passed 00:07:29.954 Test: blockdev copy ...passed 00:07:29.954 Suite: bdevio tests on: Nvme2n1 00:07:29.954 Test: blockdev write read block ...passed 00:07:29.954 Test: blockdev write zeroes read block ...passed 00:07:29.954 Test: blockdev write zeroes read no split ...passed 00:07:29.954 Test: blockdev write zeroes read split ...passed 00:07:30.213 Test: blockdev write zeroes read split partial ...passed 00:07:30.213 Test: blockdev reset ...[2024-11-20 23:56:20.373780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:30.213 passed 00:07:30.213 Test: blockdev write read 8 blocks ...[2024-11-20 23:56:20.376007] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:30.213 passed 00:07:30.213 Test: blockdev write read size > 128k ...passed 00:07:30.213 Test: blockdev write read invalid size ...passed 00:07:30.213 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:30.213 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:30.213 Test: blockdev write read max offset ...passed 00:07:30.213 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:30.213 Test: blockdev writev readv 8 blocks ...passed 00:07:30.213 Test: blockdev writev readv 30 x 1block ...passed 00:07:30.213 Test: blockdev writev readv block ...passed 00:07:30.213 Test: blockdev writev readv size > 128k ...passed 00:07:30.213 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:30.213 Test: blockdev comparev and writev ...[2024-11-20 23:56:20.380739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1806000 len:0x1000 00:07:30.213 [2024-11-20 23:56:20.380786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:30.213 passed 00:07:30.213 Test: blockdev nvme passthru rw ...passed 00:07:30.213 Test: blockdev nvme passthru vendor specific ...[2024-11-20 23:56:20.381351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:30.213 passed 00:07:30.213 Test: blockdev nvme admin passthru ...[2024-11-20 23:56:20.381377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:30.213 passed 00:07:30.213 Test: blockdev copy ...passed 00:07:30.213 Suite: bdevio tests on: Nvme1n1p2 00:07:30.213 Test: blockdev write read block ...passed 00:07:30.213 Test: blockdev write zeroes read block ...passed 00:07:30.213 Test: blockdev write zeroes read no split ...passed 00:07:30.213 Test: blockdev write zeroes read split ...passed 00:07:30.213 Test: blockdev write zeroes read split partial ...passed 00:07:30.213 Test: blockdev reset ...[2024-11-20 23:56:20.395760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:30.213 [2024-11-20 23:56:20.397149] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:30.213 passed 00:07:30.213 Test: blockdev write read 8 blocks ...passed 00:07:30.213 Test: blockdev write read size > 128k ...passed 00:07:30.213 Test: blockdev write read invalid size ...passed 00:07:30.213 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:30.213 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:30.213 Test: blockdev write read max offset ...passed 00:07:30.213 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:30.213 Test: blockdev writev readv 8 blocks ...passed 00:07:30.213 Test: blockdev writev readv 30 x 1block ...passed 00:07:30.213 Test: blockdev writev readv block ...passed 00:07:30.213 Test: blockdev writev readv size > 128k ...passed 00:07:30.213 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:30.213 Test: blockdev comparev and writev ...[2024-11-20 23:56:20.402029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c1802000 len:0x1000 00:07:30.213 [2024-11-20 23:56:20.402067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:30.213 passed 00:07:30.213 Test: blockdev nvme passthru rw ...passed 00:07:30.213 Test: blockdev nvme passthru vendor specific ...passed 00:07:30.213 Test: blockdev nvme admin passthru ...passed 00:07:30.214 Test: blockdev copy ...passed 00:07:30.214 Suite: bdevio tests on: Nvme1n1p1 00:07:30.214 Test: blockdev write read block ...passed 00:07:30.214 Test: blockdev write zeroes read block ...passed 00:07:30.214 Test: blockdev write zeroes read no split ...passed 00:07:30.214 Test: blockdev write zeroes read split ...passed 00:07:30.214 Test: blockdev write zeroes read split partial ...passed 00:07:30.214 Test: blockdev reset ...[2024-11-20 23:56:20.412723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:30.214 passed 00:07:30.214 Test: blockdev write read 8 blocks ...[2024-11-20 23:56:20.413988] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:30.214 passed 00:07:30.214 Test: blockdev write read size > 128k ...passed 00:07:30.214 Test: blockdev write read invalid size ...passed 00:07:30.214 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:30.214 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:30.214 Test: blockdev write read max offset ...passed 00:07:30.214 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:30.214 Test: blockdev writev readv 8 blocks ...passed 00:07:30.214 Test: blockdev writev readv 30 x 1block ...passed 00:07:30.214 Test: blockdev writev readv block ...passed 00:07:30.214 Test: blockdev writev readv size > 128k ...passed 00:07:30.214 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:30.214 Test: blockdev comparev and writev ...[2024-11-20 23:56:20.418372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c643b000 len:0x1000 00:07:30.214 [2024-11-20 23:56:20.418410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:30.214 passed 00:07:30.214 Test: blockdev nvme passthru rw ...passed 00:07:30.214 Test: blockdev nvme passthru vendor specific ...passed 00:07:30.214 Test: blockdev nvme admin passthru ...passed 00:07:30.214 Test: blockdev copy ...passed 00:07:30.214 Suite: bdevio tests on: Nvme0n1 00:07:30.214 Test: blockdev write read block ...passed 00:07:30.214 Test: blockdev write zeroes read block ...passed 00:07:30.214 Test: blockdev write zeroes read no split ...passed 00:07:30.214 Test: blockdev write zeroes read split ...passed 00:07:30.214 Test: blockdev write zeroes read split partial ...passed 00:07:30.214 Test: blockdev reset ...[2024-11-20 23:56:20.429042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:30.214 passed 00:07:30.214 Test: blockdev write read 8 blocks ...[2024-11-20 23:56:20.430330] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:30.214 passed 00:07:30.214 Test: blockdev write read size > 128k ...passed 00:07:30.214 Test: blockdev write read invalid size ...passed 00:07:30.214 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:30.214 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:30.214 Test: blockdev write read max offset ...passed 00:07:30.214 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:30.214 Test: blockdev writev readv 8 blocks ...passed 00:07:30.214 Test: blockdev writev readv 30 x 1block ...passed 00:07:30.214 Test: blockdev writev readv block ...passed 00:07:30.214 Test: blockdev writev readv size > 128k ...passed 00:07:30.214 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:30.214 Test: blockdev comparev and writev ...passed 00:07:30.214 Test: blockdev nvme passthru rw ...[2024-11-20 23:56:20.434055] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:30.214 separate metadata which is not supported yet. 
00:07:30.214 passed 00:07:30.214 Test: blockdev nvme passthru vendor specific ...passed 00:07:30.214 Test: blockdev nvme admin passthru ...[2024-11-20 23:56:20.434377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:30.214 [2024-11-20 23:56:20.434414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:30.214 passed 00:07:30.214 Test: blockdev copy ...passed 00:07:30.214 00:07:30.214 Run Summary: Type Total Ran Passed Failed Inactive 00:07:30.214 suites 7 7 n/a 0 0 00:07:30.214 tests 161 161 161 0 0 00:07:30.214 asserts 1025 1025 1025 0 n/a 00:07:30.214 00:07:30.214 Elapsed time = 0.342 seconds 00:07:30.214 0 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73307 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73307 ']' 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73307 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73307 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73307' 00:07:30.214 killing process with pid 73307 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73307 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73307 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:30.214 00:07:30.214 real 0m1.260s 00:07:30.214 user 0m3.205s 00:07:30.214 sys 0m0.239s 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.214 23:56:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:30.214 ************************************ 00:07:30.214 END TEST bdev_bounds 00:07:30.214 ************************************ 00:07:30.473 23:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:30.473 23:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:30.473 23:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.474 23:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.474 ************************************ 00:07:30.474 START TEST bdev_nbd 00:07:30.474 ************************************ 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73357 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73357 /var/tmp/spdk-nbd.sock 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73357 ']' 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:30.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.474 23:56:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:30.474 [2024-11-20 23:56:20.735319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:30.474 [2024-11-20 23:56:20.735548] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.474 [2024-11-20 23:56:20.869936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.733 [2024-11-20 23:56:20.898364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:31.300 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:31.559 1+0 records in 00:07:31.559 1+0 records out 00:07:31.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436419 s, 9.4 MB/s 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:31.559 23:56:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:31.817 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:31.817 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:31.818 1+0 records in 00:07:31.818 1+0 records out 00:07:31.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410045 s, 10.0 MB/s 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:31.818 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:32.076 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:32.076 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:32.076 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:32.076 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.077 1+0 records in 00:07:32.077 1+0 records out 00:07:32.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307589 s, 13.3 MB/s 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:32.077 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:32.335 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:32.335 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:32.335 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:32.335 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.335 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.335 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.336 1+0 records in 00:07:32.336 1+0 records out 00:07:32.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317661 s, 12.9 MB/s 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.336 1+0 records in 00:07:32.336 1+0 records out 00:07:32.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445478 s, 9.2 MB/s 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:32.336 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.595 1+0 records in 00:07:32.595 1+0 records out 00:07:32.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374968 s, 10.9 MB/s 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:32.595 23:56:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.855 1+0 records in 00:07:32.855 1+0 records out 00:07:32.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335531 s, 12.2 MB/s 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:32.855 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.114 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd0", 00:07:33.114 "bdev_name": "Nvme0n1" 00:07:33.114 }, 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd1", 00:07:33.114 "bdev_name": "Nvme1n1p1" 00:07:33.114 }, 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd2", 00:07:33.114 "bdev_name": "Nvme1n1p2" 00:07:33.114 }, 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd3", 00:07:33.114 "bdev_name": "Nvme2n1" 00:07:33.114 }, 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd4", 00:07:33.114 "bdev_name": "Nvme2n2" 00:07:33.114 }, 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd5", 00:07:33.114 "bdev_name": "Nvme2n3" 00:07:33.114 }, 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd6", 00:07:33.114 "bdev_name": "Nvme3n1" 00:07:33.114 } 00:07:33.114 ]' 00:07:33.114 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:33.114 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:33.114 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd0", 00:07:33.114 "bdev_name": "Nvme0n1" 00:07:33.114 }, 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd1", 00:07:33.114 "bdev_name": "Nvme1n1p1" 00:07:33.114 }, 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd2", 00:07:33.114 "bdev_name": "Nvme1n1p2" 00:07:33.114 }, 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd3", 00:07:33.114 "bdev_name": "Nvme2n1" 00:07:33.114 }, 00:07:33.114 { 00:07:33.114 "nbd_device": "/dev/nbd4", 00:07:33.115 "bdev_name": "Nvme2n2" 00:07:33.115 }, 00:07:33.115 { 00:07:33.115 "nbd_device": "/dev/nbd5", 00:07:33.115 "bdev_name": "Nvme2n3" 00:07:33.115 }, 00:07:33.115 { 00:07:33.115 "nbd_device": "/dev/nbd6", 00:07:33.115 "bdev_name": "Nvme3n1" 00:07:33.115 } 00:07:33.115 ]' 00:07:33.115 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:33.115 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.115 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:33.115 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.115 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:33.115 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.115 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:33.374 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:33.374 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:33.374 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:33.374 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.374 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.374 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:33.374 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:33.374 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.374 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.374 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:33.632 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:33.632 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:33.632 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:33.632 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.632 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.632 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:33.632 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:33.632 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.632 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.632 23:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.891 23:56:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.891 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:34.149 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:34.149 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:34.149 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:34.149 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.149 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.149 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:34.149 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:34.149 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.149 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.149 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:34.407 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:34.407 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:34.407 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:34.407 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.407 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.407 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:34.407 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:34.407 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.407 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.407 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:34.665 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:34.665 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:34.665 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:34.665 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.665 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.665 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:34.665 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:34.665 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.665 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.665 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.665 23:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:34.923 
23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:34.923 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:34.923 /dev/nbd0 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.181 1+0 records in 00:07:35.181 1+0 records out 00:07:35.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380943 s, 10.8 MB/s 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:35.181 /dev/nbd1 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:35.181 23:56:25 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.181 1+0 records in 00:07:35.181 1+0 records out 00:07:35.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476513 s, 8.6 MB/s 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:35.181 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:35.441 /dev/nbd10 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.441 1+0 records in 00:07:35.441 1+0 records out 00:07:35.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027448 s, 14.9 MB/s 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:35.441 23:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:35.442 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:35.442 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:35.442 23:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:35.755 /dev/nbd11 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.755 1+0 records in 00:07:35.755 1+0 records out 00:07:35.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435132 s, 9.4 MB/s 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:35.755 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:36.014 /dev/nbd12 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
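The waitfornbd pattern repeated above for nbd0 through nbd14 condenses to the sketch below, reconstructed from the xtrace output: poll /proc/partitions until the kernel registers the device (at most 20 tries, as in the trace), then prove it services I/O with one 4 KiB O_DIRECT read. The poll interval and the /tmp scratch path are assumptions; everything else mirrors the traced commands.

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # poll interval assumed; not visible in the trace
        done
        # a single direct-I/O read confirms the device actually answers requests
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]  # the trace checks '[' 4096 '!=' 0 ']' before returning 0
    }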
00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.014 1+0 records in 00:07:36.014 1+0 records out 00:07:36.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564669 s, 7.3 MB/s 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:36.014 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:36.272 /dev/nbd13 00:07:36.272 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:36.272 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:36.272 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:36.272 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:36.272 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:36.272 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:36.272 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:36.272 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:36.272 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:36.272 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:36.272 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.272 1+0 records in 00:07:36.272 1+0 records out 00:07:36.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056682 s, 7.2 MB/s 00:07:36.273 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.273 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:36.273 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.273 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:36.273 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:36.273 23:56:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.273 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:36.273 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:36.531 /dev/nbd14 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.531 1+0 records in 00:07:36.531 1+0 records out 00:07:36.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433229 s, 9.5 MB/s 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:36.531 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:36.531 { 00:07:36.531 "nbd_device": "/dev/nbd0", 00:07:36.531 "bdev_name": "Nvme0n1" 00:07:36.531 }, 00:07:36.531 { 00:07:36.531 "nbd_device": "/dev/nbd1", 00:07:36.531 "bdev_name": "Nvme1n1p1" 00:07:36.531 }, 00:07:36.531 { 00:07:36.531 "nbd_device": "/dev/nbd10", 00:07:36.531 "bdev_name": "Nvme1n1p2" 00:07:36.531 }, 00:07:36.531 { 00:07:36.531 "nbd_device": "/dev/nbd11", 00:07:36.531 "bdev_name": "Nvme2n1" 00:07:36.531 }, 00:07:36.531 { 00:07:36.531 "nbd_device": "/dev/nbd12", 00:07:36.531 "bdev_name": "Nvme2n2" 00:07:36.531 }, 00:07:36.531 { 00:07:36.531 "nbd_device": "/dev/nbd13", 00:07:36.531 "bdev_name": "Nvme2n3" 
00:07:36.531 }, 00:07:36.531 { 00:07:36.531 "nbd_device": "/dev/nbd14", 00:07:36.532 "bdev_name": "Nvme3n1" 00:07:36.532 } 00:07:36.532 ]' 00:07:36.532 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:36.532 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:36.532 { 00:07:36.532 "nbd_device": "/dev/nbd0", 00:07:36.532 "bdev_name": "Nvme0n1" 00:07:36.532 }, 00:07:36.532 { 00:07:36.532 "nbd_device": "/dev/nbd1", 00:07:36.532 "bdev_name": "Nvme1n1p1" 00:07:36.532 }, 00:07:36.532 { 00:07:36.532 "nbd_device": "/dev/nbd10", 00:07:36.532 "bdev_name": "Nvme1n1p2" 00:07:36.532 }, 00:07:36.532 { 00:07:36.532 "nbd_device": "/dev/nbd11", 00:07:36.532 "bdev_name": "Nvme2n1" 00:07:36.532 }, 00:07:36.532 { 00:07:36.532 "nbd_device": "/dev/nbd12", 00:07:36.532 "bdev_name": "Nvme2n2" 00:07:36.532 }, 00:07:36.532 { 00:07:36.532 "nbd_device": "/dev/nbd13", 00:07:36.532 "bdev_name": "Nvme2n3" 00:07:36.532 }, 00:07:36.532 { 00:07:36.532 "nbd_device": "/dev/nbd14", 00:07:36.532 "bdev_name": "Nvme3n1" 00:07:36.532 } 00:07:36.532 ]' 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:36.790 /dev/nbd1 00:07:36.790 /dev/nbd10 00:07:36.790 /dev/nbd11 00:07:36.790 /dev/nbd12 00:07:36.790 /dev/nbd13 00:07:36.790 /dev/nbd14' 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:36.790 /dev/nbd1 00:07:36.790 /dev/nbd10 00:07:36.790 /dev/nbd11 00:07:36.790 /dev/nbd12 00:07:36.790 /dev/nbd13 00:07:36.790 /dev/nbd14' 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:36.790 256+0 records in 00:07:36.790 256+0 records out 00:07:36.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00773804 s, 136 MB/s 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:36.790 23:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:36.790 256+0 records in 00:07:36.790 256+0 records out 00:07:36.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0602125 s, 17.4 MB/s 00:07:36.790 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:36.790 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:36.790 256+0 records in 00:07:36.790 256+0 records out 00:07:36.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0621867 s, 16.9 MB/s 00:07:36.790 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:36.790 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:36.790 256+0 records in 00:07:36.790 256+0 records out 00:07:36.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0821608 s, 12.8 MB/s 00:07:36.790 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:36.790 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:37.048 256+0 records in 00:07:37.048 256+0 records out 00:07:37.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0738996 s, 14.2 MB/s 00:07:37.048 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.048 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:37.048 256+0 records in 00:07:37.048 256+0 records out 00:07:37.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0736708 s, 14.2 MB/s 00:07:37.048 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.048 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:37.048 256+0 records in 00:07:37.048 256+0 records out 00:07:37.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0787118 s, 13.3 MB/s 00:07:37.048 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.048 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:37.307 256+0 records in 00:07:37.307 256+0 records out 00:07:37.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0796975 s, 13.2 MB/s 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.307 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.566 23:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:37.824 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:37.824 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:37.824 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:37.824 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.824 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.824 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:37.824 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.824 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.824 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.824 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:38.083 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:38.083 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:38.083 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:38.083 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.083 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.083 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:38.083 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.083 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.083 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.083 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:38.340 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:38.340 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:38.340 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:38.340 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.340 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.340 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:38.340 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.340 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.340 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.340 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:38.598 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:38.598 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:38.598 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:38.598 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.598 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.598 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:38.598 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.598 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.598 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.598 23:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:38.598 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:38.598 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:38.598 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:38.598 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.598 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.598 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:38.598 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.598 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.598 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.598 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.598 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:38.856 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:38.857 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.857 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:38.857 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:39.115 malloc_lvol_verify 00:07:39.115 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:39.373 7caa8522-ac38-4538-a2f2-47a536bbb6be 00:07:39.373 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:39.373 b0c50d4d-8d93-45f7-8b9a-d7c42328bb24 00:07:39.373 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:39.631 /dev/nbd0 00:07:39.631 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:39.631 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:39.631 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:39.631 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:39.631 23:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:39.631 mke2fs 1.47.0 (5-Feb-2023) 00:07:39.631 Discarding device blocks: 0/4096 done 00:07:39.631 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:39.631 00:07:39.631 Allocating group tables: 0/1 done 00:07:39.631 Writing inode tables: 0/1 done 00:07:39.631 Creating journal (1024 blocks): done 00:07:39.631 Writing superblocks and filesystem accounting information: 0/1 done 00:07:39.631 00:07:39.631 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:39.631 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.631 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:39.631 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:39.631 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:39.631 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:39.631 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73357 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73357 ']' 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73357 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73357 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.890 killing process with pid 73357 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73357' 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73357 00:07:39.890 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73357 00:07:40.148 23:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:40.148 00:07:40.148 real 0m9.701s 00:07:40.148 user 0m14.193s 00:07:40.148 sys 0m3.325s 00:07:40.148 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.148 23:56:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:40.148 ************************************ 00:07:40.148 END TEST bdev_nbd 00:07:40.148 ************************************ 00:07:40.148 23:56:30 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:40.148 skipping fio tests on NVMe due to multi-ns failures. 00:07:40.148 23:56:30 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:40.148 23:56:30 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:40.148 23:56:30 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:40.148 23:56:30 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:40.148 23:56:30 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:40.148 23:56:30 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:40.148 23:56:30 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.148 23:56:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.148 ************************************ 00:07:40.148 START TEST bdev_verify 00:07:40.148 ************************************ 00:07:40.148 23:56:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:40.148 [2024-11-20 23:56:30.464927] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:40.148 [2024-11-20 23:56:30.465013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73758 ] 00:07:40.407 [2024-11-20 23:56:30.595113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.407 [2024-11-20 23:56:30.625521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.407 [2024-11-20 23:56:30.625651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.665 Running I/O for 5 seconds... 
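For reference, the bdevperf flags in the command above: -q 128 sets the queue depth, -o 4096 the I/O size in bytes, -w verify selects a read-back-and-check workload, -t 5 the runtime in seconds, and -m 0x3 the core mask, which matches the two reactors started on cores 0 and 1. The MiB/s column in the samples that follow is simply IOPS times I/O size; for the first sample:

    $ echo 'scale=2; 22080 * 4096 / 1048576' | bc
    86.25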
00:07:42.974 22080.00 IOPS, 86.25 MiB/s [2024-11-20T23:56:34.330Z] 21952.00 IOPS, 85.75 MiB/s [2024-11-20T23:56:35.703Z] 21461.33 IOPS, 83.83 MiB/s [2024-11-20T23:56:36.271Z] 21984.00 IOPS, 85.88 MiB/s [2024-11-20T23:56:36.271Z] 21529.60 IOPS, 84.10 MiB/s 00:07:45.850 Latency(us) 00:07:45.850 [2024-11-20T23:56:36.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.850 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x0 length 0xbd0bd 00:07:45.850 Nvme0n1 : 5.08 1562.16 6.10 0.00 0.00 81742.28 13107.20 80659.69 00:07:45.850 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:45.850 Nvme0n1 : 5.08 1474.99 5.76 0.00 0.00 86331.97 7259.37 81062.99 00:07:45.850 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x0 length 0x4ff80 00:07:45.850 Nvme1n1p1 : 5.08 1561.70 6.10 0.00 0.00 81640.04 14720.39 75416.81 00:07:45.850 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:45.850 Nvme1n1p1 : 5.09 1483.11 5.79 0.00 0.00 85885.75 10939.47 74206.92 00:07:45.850 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x0 length 0x4ff7f 00:07:45.850 Nvme1n1p2 : 5.08 1560.72 6.10 0.00 0.00 81503.52 16434.41 70577.23 00:07:45.850 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:45.850 Nvme1n1p2 : 5.09 1482.68 5.79 0.00 0.00 85740.67 11090.71 71787.13 00:07:45.850 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x0 length 0x80000 00:07:45.850 Nvme2n1 : 5.09 1560.28 6.09 0.00 0.00 81374.38 16837.71 72190.42 00:07:45.850 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x80000 length 0x80000 00:07:45.850 Nvme2n1 : 5.10 1481.80 5.79 0.00 0.00 85624.32 13208.02 69770.63 00:07:45.850 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x0 length 0x80000 00:07:45.850 Nvme2n2 : 5.09 1559.88 6.09 0.00 0.00 81234.76 16232.76 75013.51 00:07:45.850 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x80000 length 0x80000 00:07:45.850 Nvme2n2 : 5.10 1481.40 5.79 0.00 0.00 85467.94 13611.32 72593.72 00:07:45.850 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x0 length 0x80000 00:07:45.850 Nvme2n3 : 5.09 1559.43 6.09 0.00 0.00 81094.75 16333.59 76223.41 00:07:45.850 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x80000 length 0x80000 00:07:45.850 Nvme2n3 : 5.10 1481.01 5.79 0.00 0.00 85318.79 13913.80 75013.51 00:07:45.850 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x0 length 0x20000 00:07:45.850 Nvme3n1 : 5.09 1558.98 6.09 0.00 0.00 80953.08 10536.17 78239.90 00:07:45.850 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:45.850 Verification LBA range: start 0x20000 length 0x20000 00:07:45.850 
Nvme3n1 : 5.10 1480.63 5.78 0.00 0.00 85174.32 9124.63 76626.71 00:07:45.850 [2024-11-20T23:56:36.271Z] =================================================================================================================== 00:07:45.850 [2024-11-20T23:56:36.271Z] Total : 21288.77 83.16 0.00 0.00 83451.35 7259.37 81062.99 00:07:46.416 ************************************ 00:07:46.416 END TEST bdev_verify 00:07:46.416 ************************************ 00:07:46.416 00:07:46.416 real 0m6.355s 00:07:46.416 user 0m12.066s 00:07:46.416 sys 0m0.178s 00:07:46.416 23:56:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.416 23:56:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:46.416 23:56:36 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:46.416 23:56:36 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:46.416 23:56:36 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.416 23:56:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.416 ************************************ 00:07:46.416 START TEST bdev_verify_big_io 00:07:46.416 ************************************ 00:07:46.416 23:56:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:46.674 [2024-11-20 23:56:36.874914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:46.674 [2024-11-20 23:56:36.875021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73845 ] 00:07:46.674 [2024-11-20 23:56:37.010369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:46.674 [2024-11-20 23:56:37.043627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.674 [2024-11-20 23:56:37.043636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.242 Running I/O for 5 seconds... 
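The big-I/O variant reruns the same verify workload at -o 65536, so the IOPS figures below are far smaller while each I/O moves 16x more data; the early 16.00 IOPS sample, for instance, works out to exactly 16 x 64 KiB = 1.00 MiB/s.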
00:07:51.739 16.00 IOPS, 1.00 MiB/s [2024-11-20T23:56:43.536Z] 1413.50 IOPS, 88.34 MiB/s [2024-11-20T23:56:43.795Z] 1996.00 IOPS, 124.75 MiB/s 00:07:53.374 Latency(us) 00:07:53.374 [2024-11-20T23:56:43.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.374 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x0 length 0xbd0b 00:07:53.374 Nvme0n1 : 5.87 108.26 6.77 0.00 0.00 1114302.53 19156.68 1419610.58 00:07:53.374 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:53.374 Nvme0n1 : 5.79 103.21 6.45 0.00 0.00 1177371.86 18652.55 1432516.14 00:07:53.374 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x0 length 0x4ff8 00:07:53.374 Nvme1n1p1 : 5.73 111.68 6.98 0.00 0.00 1066450.00 97194.93 1219574.55 00:07:53.374 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:53.374 Nvme1n1p1 : 5.80 110.43 6.90 0.00 0.00 1077807.58 105664.20 1213121.77 00:07:53.374 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x0 length 0x4ff7 00:07:53.374 Nvme1n1p2 : 5.96 113.19 7.07 0.00 0.00 999165.07 134701.69 1090519.04 00:07:53.374 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:53.374 Nvme1n1p2 : 5.91 111.81 6.99 0.00 0.00 1020257.25 113730.17 1013085.74 00:07:53.374 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x0 length 0x8000 00:07:53.374 Nvme2n1 : 6.08 122.38 7.65 0.00 0.00 906635.00 56461.78 1084066.26 00:07:53.374 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x8000 length 0x8000 00:07:53.374 Nvme2n1 : 6.02 116.98 7.31 0.00 0.00 945636.36 88322.36 967916.31 00:07:53.374 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x0 length 0x8000 00:07:53.374 Nvme2n2 : 6.08 126.29 7.89 0.00 0.00 852293.84 57268.38 1103424.59 00:07:53.374 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x8000 length 0x8000 00:07:53.374 Nvme2n2 : 6.13 124.59 7.79 0.00 0.00 862305.82 39119.95 1167952.34 00:07:53.374 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x0 length 0x8000 00:07:53.374 Nvme2n3 : 6.14 135.41 8.46 0.00 0.00 767911.70 27222.65 1122782.92 00:07:53.374 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x8000 length 0x8000 00:07:53.374 Nvme2n3 : 6.16 125.16 7.82 0.00 0.00 828426.50 25609.45 1871304.86 00:07:53.374 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x0 length 0x2000 00:07:53.374 Nvme3n1 : 6.29 168.43 10.53 0.00 0.00 599150.21 124.46 1142141.24 00:07:53.374 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.374 Verification LBA range: start 0x2000 length 0x2000 00:07:53.374 Nvme3n1 : 6.29 155.11 9.69 0.00 0.00 648957.16 740.43 1910021.51 00:07:53.374 
[2024-11-20T23:56:43.795Z] =================================================================================================================== 00:07:53.374 [2024-11-20T23:56:43.795Z] Total : 1732.95 108.31 0.00 0.00 891592.38 124.46 1910021.51 00:07:54.306 00:07:54.306 real 0m7.802s 00:07:54.306 user 0m14.947s 00:07:54.306 sys 0m0.187s 00:07:54.307 23:56:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.307 ************************************ 00:07:54.307 END TEST bdev_verify_big_io 00:07:54.307 ************************************ 00:07:54.307 23:56:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:54.307 23:56:44 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:54.307 23:56:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:54.307 23:56:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.307 23:56:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:54.307 ************************************ 00:07:54.307 START TEST bdev_write_zeroes 00:07:54.307 ************************************ 00:07:54.307 23:56:44 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:54.307 [2024-11-20 23:56:44.724564] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:54.307 [2024-11-20 23:56:44.724801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73948 ] 00:07:54.564 [2024-11-20 23:56:44.859357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.564 [2024-11-20 23:56:44.889930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.131 Running I/O for 1 seconds... 
00:07:56.065 70336.00 IOPS, 274.75 MiB/s 00:07:56.065 Latency(us) 00:07:56.065 [2024-11-20T23:56:46.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.065 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.065 Nvme0n1 : 1.02 10001.64 39.07 0.00 0.00 12772.63 8872.57 23996.26 00:07:56.065 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.065 Nvme1n1p1 : 1.03 9989.27 39.02 0.00 0.00 12770.87 9023.80 23391.31 00:07:56.065 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.065 Nvme1n1p2 : 1.03 9977.13 38.97 0.00 0.00 12761.58 9023.80 22685.54 00:07:56.065 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.065 Nvme2n1 : 1.03 9965.91 38.93 0.00 0.00 12760.55 8922.98 22080.59 00:07:56.065 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.065 Nvme2n2 : 1.03 9954.64 38.89 0.00 0.00 12741.70 9074.22 21677.29 00:07:56.065 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.065 Nvme2n3 : 1.03 9943.48 38.84 0.00 0.00 12728.64 8922.98 22181.42 00:07:56.065 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.065 Nvme3n1 : 1.03 9932.28 38.80 0.00 0.00 12706.30 8166.79 23391.31 00:07:56.065 [2024-11-20T23:56:46.486Z] =================================================================================================================== 00:07:56.065 [2024-11-20T23:56:46.486Z] Total : 69764.34 272.52 0.00 0.00 12748.89 8166.79 23996.26 00:07:56.065 00:07:56.065 real 0m1.816s 00:07:56.065 user 0m1.543s 00:07:56.065 sys 0m0.163s 00:07:56.065 ************************************ 00:07:56.065 END TEST bdev_write_zeroes 00:07:56.065 ************************************ 00:07:56.065 23:56:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.065 23:56:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:56.323 23:56:46 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:56.323 23:56:46 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:56.323 23:56:46 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.323 23:56:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.323 ************************************ 00:07:56.323 START TEST bdev_json_nonenclosed 00:07:56.323 ************************************ 00:07:56.323 23:56:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:56.323 [2024-11-20 23:56:46.594948] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:56.323 [2024-11-20 23:56:46.595145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73985 ] 00:07:56.323 [2024-11-20 23:56:46.720079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.581 [2024-11-20 23:56:46.749963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.581 [2024-11-20 23:56:46.750037] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:56.581 [2024-11-20 23:56:46.750052] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:56.581 [2024-11-20 23:56:46.750060] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.581 00:07:56.582 real 0m0.271s 00:07:56.582 user 0m0.105s 00:07:56.582 sys 0m0.063s 00:07:56.582 ************************************ 00:07:56.582 END TEST bdev_json_nonenclosed 00:07:56.582 ************************************ 00:07:56.582 23:56:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.582 23:56:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:56.582 23:56:46 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:56.582 23:56:46 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:56.582 23:56:46 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.582 23:56:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.582 ************************************ 00:07:56.582 START TEST bdev_json_nonarray 00:07:56.582 ************************************ 00:07:56.582 23:56:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:56.582 [2024-11-20 23:56:46.917418] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:56.582 [2024-11-20 23:56:46.917594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74016 ] 00:07:56.840 [2024-11-20 23:56:47.046501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.840 [2024-11-20 23:56:47.075609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.840 [2024-11-20 23:56:47.075837] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:56.840 [2024-11-20 23:56:47.075861] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:56.840 [2024-11-20 23:56:47.075875] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.840 00:07:56.840 real 0m0.273s 00:07:56.840 user 0m0.095s 00:07:56.840 sys 0m0.075s 00:07:56.840 ************************************ 00:07:56.840 END TEST bdev_json_nonarray 00:07:56.840 ************************************ 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:56.840 23:56:47 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:56.840 23:56:47 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:56.840 23:56:47 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:56.840 23:56:47 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.840 23:56:47 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.840 23:56:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.840 ************************************ 00:07:56.840 START TEST bdev_gpt_uuid 00:07:56.840 ************************************ 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74036 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74036 00:07:56.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74036 ']' 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.840 23:56:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:57.098 [2024-11-20 23:56:47.270578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:57.098 [2024-11-20 23:56:47.270701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74036 ] 00:07:57.098 [2024-11-20 23:56:47.397603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.098 [2024-11-20 23:56:47.427031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.664 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.664 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:57.664 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:57.664 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.664 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:58.231 Some configs were skipped because the RPC state that can call them passed over. 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:58.231 { 00:07:58.231 "name": "Nvme1n1p1", 00:07:58.231 "aliases": [ 00:07:58.231 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:58.231 ], 00:07:58.231 "product_name": "GPT Disk", 00:07:58.231 "block_size": 4096, 00:07:58.231 "num_blocks": 655104, 00:07:58.231 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:58.231 "assigned_rate_limits": { 00:07:58.231 "rw_ios_per_sec": 0, 00:07:58.231 "rw_mbytes_per_sec": 0, 00:07:58.231 "r_mbytes_per_sec": 0, 00:07:58.231 "w_mbytes_per_sec": 0 00:07:58.231 }, 00:07:58.231 "claimed": false, 00:07:58.231 "zoned": false, 00:07:58.231 "supported_io_types": { 00:07:58.231 "read": true, 00:07:58.231 "write": true, 00:07:58.231 "unmap": true, 00:07:58.231 "flush": true, 00:07:58.231 "reset": true, 00:07:58.231 "nvme_admin": false, 00:07:58.231 "nvme_io": false, 00:07:58.231 "nvme_io_md": false, 00:07:58.231 "write_zeroes": true, 00:07:58.231 "zcopy": false, 00:07:58.231 "get_zone_info": false, 00:07:58.231 "zone_management": false, 00:07:58.231 "zone_append": false, 00:07:58.231 "compare": true, 00:07:58.231 "compare_and_write": false, 00:07:58.231 "abort": true, 00:07:58.231 "seek_hole": false, 00:07:58.231 "seek_data": false, 00:07:58.231 "copy": true, 00:07:58.231 "nvme_iov_md": false 00:07:58.231 }, 00:07:58.231 "driver_specific": { 
00:07:58.231 "gpt": { 00:07:58.231 "base_bdev": "Nvme1n1", 00:07:58.231 "offset_blocks": 256, 00:07:58.231 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:58.231 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:58.231 "partition_name": "SPDK_TEST_first" 00:07:58.231 } 00:07:58.231 } 00:07:58.231 } 00:07:58.231 ]' 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:58.231 { 00:07:58.231 "name": "Nvme1n1p2", 00:07:58.231 "aliases": [ 00:07:58.231 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:58.231 ], 00:07:58.231 "product_name": "GPT Disk", 00:07:58.231 "block_size": 4096, 00:07:58.231 "num_blocks": 655103, 00:07:58.231 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:58.231 "assigned_rate_limits": { 00:07:58.231 "rw_ios_per_sec": 0, 00:07:58.231 "rw_mbytes_per_sec": 0, 00:07:58.231 "r_mbytes_per_sec": 0, 00:07:58.231 "w_mbytes_per_sec": 0 00:07:58.231 }, 00:07:58.231 "claimed": false, 00:07:58.231 "zoned": false, 00:07:58.231 "supported_io_types": { 00:07:58.231 "read": true, 00:07:58.231 "write": true, 00:07:58.231 "unmap": true, 00:07:58.231 "flush": true, 00:07:58.231 "reset": true, 00:07:58.231 "nvme_admin": false, 00:07:58.231 "nvme_io": false, 00:07:58.231 "nvme_io_md": false, 00:07:58.231 "write_zeroes": true, 00:07:58.231 "zcopy": false, 00:07:58.231 "get_zone_info": false, 00:07:58.231 "zone_management": false, 00:07:58.231 "zone_append": false, 00:07:58.231 "compare": true, 00:07:58.231 "compare_and_write": false, 00:07:58.231 "abort": true, 00:07:58.231 "seek_hole": false, 00:07:58.231 "seek_data": false, 00:07:58.231 "copy": true, 00:07:58.231 "nvme_iov_md": false 00:07:58.231 }, 00:07:58.231 "driver_specific": { 00:07:58.231 "gpt": { 00:07:58.231 "base_bdev": "Nvme1n1", 00:07:58.231 "offset_blocks": 655360, 00:07:58.231 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:58.231 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:58.231 "partition_name": "SPDK_TEST_second" 00:07:58.231 } 00:07:58.231 } 00:07:58.231 } 00:07:58.231 ]' 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74036 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74036 ']' 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74036 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74036 00:07:58.231 killing process with pid 74036 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74036' 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74036 00:07:58.231 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74036 00:07:58.489 00:07:58.489 real 0m1.652s 00:07:58.489 user 0m1.797s 00:07:58.489 sys 0m0.298s 00:07:58.489 ************************************ 00:07:58.489 END TEST bdev_gpt_uuid 00:07:58.489 ************************************ 00:07:58.489 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.489 23:56:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:58.489 23:56:48 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:58.489 23:56:48 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:58.489 23:56:48 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:58.489 23:56:48 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:58.489 23:56:48 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:58.489 23:56:48 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:58.489 23:56:48 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:58.489 23:56:48 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:58.489 23:56:48 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:59.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:59.055 Waiting for block devices as requested 00:07:59.055 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:59.055 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:59.313 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:59.313 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:04.577 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:04.577 23:56:54 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:04.577 23:56:54 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:04.577 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:04.577 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:04.577 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:04.577 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:04.577 23:56:54 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:04.577 ************************************ 00:08:04.577 END TEST blockdev_nvme_gpt 00:08:04.577 ************************************ 00:08:04.577 00:08:04.577 real 0m47.102s 00:08:04.577 user 1m0.131s 00:08:04.577 sys 0m7.079s 00:08:04.577 23:56:54 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.577 23:56:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:04.835 23:56:55 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:04.835 23:56:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.835 23:56:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.835 23:56:55 -- common/autotest_common.sh@10 -- # set +x 00:08:04.835 ************************************ 00:08:04.835 START TEST nvme 00:08:04.835 ************************************ 00:08:04.835 23:56:55 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:04.835 * Looking for test storage... 00:08:04.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:04.835 23:56:55 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:04.835 23:56:55 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:04.835 23:56:55 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:04.835 23:56:55 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:04.835 23:56:55 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.835 23:56:55 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.835 23:56:55 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.835 23:56:55 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.835 23:56:55 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.835 23:56:55 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.835 23:56:55 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.835 23:56:55 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.835 23:56:55 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.835 23:56:55 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.835 23:56:55 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.835 23:56:55 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:04.835 23:56:55 nvme -- scripts/common.sh@345 -- # : 1 00:08:04.835 23:56:55 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.835 23:56:55 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.835 23:56:55 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:04.835 23:56:55 nvme -- scripts/common.sh@353 -- # local d=1 00:08:04.835 23:56:55 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.835 23:56:55 nvme -- scripts/common.sh@355 -- # echo 1 00:08:04.835 23:56:55 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.835 23:56:55 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:04.835 23:56:55 nvme -- scripts/common.sh@353 -- # local d=2 00:08:04.835 23:56:55 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.835 23:56:55 nvme -- scripts/common.sh@355 -- # echo 2 00:08:04.835 23:56:55 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.835 23:56:55 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.835 23:56:55 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.835 23:56:55 nvme -- scripts/common.sh@368 -- # return 0 00:08:04.835 23:56:55 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.835 23:56:55 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:04.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.835 --rc genhtml_branch_coverage=1 00:08:04.835 --rc genhtml_function_coverage=1 00:08:04.835 --rc genhtml_legend=1 00:08:04.835 --rc geninfo_all_blocks=1 00:08:04.835 --rc geninfo_unexecuted_blocks=1 00:08:04.835 00:08:04.835 ' 00:08:04.835 23:56:55 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:04.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.835 --rc genhtml_branch_coverage=1 00:08:04.835 --rc genhtml_function_coverage=1 00:08:04.835 --rc genhtml_legend=1 00:08:04.835 --rc geninfo_all_blocks=1 00:08:04.835 --rc geninfo_unexecuted_blocks=1 00:08:04.835 00:08:04.835 ' 00:08:04.835 23:56:55 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:04.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.835 --rc genhtml_branch_coverage=1 00:08:04.835 --rc genhtml_function_coverage=1 00:08:04.835 --rc genhtml_legend=1 00:08:04.835 --rc geninfo_all_blocks=1 00:08:04.835 --rc geninfo_unexecuted_blocks=1 00:08:04.835 00:08:04.835 ' 00:08:04.835 23:56:55 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:04.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.835 --rc genhtml_branch_coverage=1 00:08:04.835 --rc genhtml_function_coverage=1 00:08:04.835 --rc genhtml_legend=1 00:08:04.835 --rc geninfo_all_blocks=1 00:08:04.835 --rc geninfo_unexecuted_blocks=1 00:08:04.835 00:08:04.835 ' 00:08:04.835 23:56:55 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:05.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:05.967 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:05.967 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:05.967 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:05.967 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:05.967 23:56:56 nvme -- nvme/nvme.sh@79 -- # uname 00:08:05.967 23:56:56 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:05.967 23:56:56 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:05.967 23:56:56 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:05.967 23:56:56 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:05.967 23:56:56 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:05.967 23:56:56 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:05.967 23:56:56 nvme -- common/autotest_common.sh@1071 -- # stubpid=74660 00:08:05.967 23:56:56 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:05.967 23:56:56 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:05.967 Waiting for stub to ready for secondary processes... 00:08:05.967 23:56:56 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:05.967 23:56:56 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/74660 ]] 00:08:05.967 23:56:56 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:05.967 [2024-11-20 23:56:56.266097] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:05.967 [2024-11-20 23:56:56.266417] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:06.899 [2024-11-20 23:56:57.032772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:06.899 [2024-11-20 23:56:57.054106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.899 [2024-11-20 23:56:57.054312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.899 [2024-11-20 23:56:57.054374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.899 [2024-11-20 23:56:57.064598] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:06.899 [2024-11-20 23:56:57.064638] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:06.899 [2024-11-20 23:56:57.074970] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:06.899 [2024-11-20 23:56:57.075605] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:06.899 [2024-11-20 23:56:57.076842] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:06.899 [2024-11-20 23:56:57.077063] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:06.899 [2024-11-20 23:56:57.077154] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:06.899 [2024-11-20 23:56:57.078068] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:06.899 [2024-11-20 23:56:57.078447] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:06.899 [2024-11-20 23:56:57.078613] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:06.899 [2024-11-20 23:56:57.080054] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:06.899 [2024-11-20 23:56:57.080245] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:06.899 [2024-11-20 23:56:57.080326] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:06.899 [2024-11-20 23:56:57.080380] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:06.899 [2024-11-20 23:56:57.080435] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:06.899 done. 00:08:06.899 23:56:57 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:06.899 23:56:57 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:06.899 23:56:57 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:06.899 23:56:57 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:06.899 23:56:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.899 23:56:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.899 ************************************ 00:08:06.899 START TEST nvme_reset 00:08:06.899 ************************************ 00:08:06.899 23:56:57 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:07.157 Initializing NVMe Controllers 00:08:07.157 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:07.157 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:07.157 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:07.157 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:07.157 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:07.157 ************************************ 00:08:07.157 END TEST nvme_reset 00:08:07.157 ************************************ 00:08:07.157 00:08:07.157 real 0m0.180s 00:08:07.157 user 0m0.056s 00:08:07.157 sys 0m0.078s 00:08:07.157 23:56:57 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.157 23:56:57 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:07.157 23:56:57 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:07.157 23:56:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.157 23:56:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.157 23:56:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.157 ************************************ 00:08:07.157 START TEST nvme_identify 00:08:07.157 ************************************ 00:08:07.157 23:56:57 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:07.157 23:56:57 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:07.157 23:56:57 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:07.157 23:56:57 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:07.157 23:56:57 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:07.157 23:56:57 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:07.157 23:56:57 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:07.157 23:56:57 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:07.157 23:56:57 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:07.157 23:56:57 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:07.157 23:56:57 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:07.157 23:56:57 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:07.157 23:56:57 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:07.417 [2024-11-20 
23:56:57.681558] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 74682 terminated unexpectedly 00:08:07.417 ===================================================== 00:08:07.417 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:07.417 ===================================================== 00:08:07.417 Controller Capabilities/Features 00:08:07.417 ================================ 00:08:07.417 Vendor ID: 1b36 00:08:07.417 Subsystem Vendor ID: 1af4 00:08:07.417 Serial Number: 12343 00:08:07.417 Model Number: QEMU NVMe Ctrl 00:08:07.417 Firmware Version: 8.0.0 00:08:07.417 Recommended Arb Burst: 6 00:08:07.417 IEEE OUI Identifier: 00 54 52 00:08:07.417 Multi-path I/O 00:08:07.417 May have multiple subsystem ports: No 00:08:07.417 May have multiple controllers: Yes 00:08:07.417 Associated with SR-IOV VF: No 00:08:07.417 Max Data Transfer Size: 524288 00:08:07.417 Max Number of Namespaces: 256 00:08:07.417 Max Number of I/O Queues: 64 00:08:07.417 NVMe Specification Version (VS): 1.4 00:08:07.417 NVMe Specification Version (Identify): 1.4 00:08:07.417 Maximum Queue Entries: 2048 00:08:07.417 Contiguous Queues Required: Yes 00:08:07.417 Arbitration Mechanisms Supported 00:08:07.417 Weighted Round Robin: Not Supported 00:08:07.417 Vendor Specific: Not Supported 00:08:07.418 Reset Timeout: 7500 ms 00:08:07.418 Doorbell Stride: 4 bytes 00:08:07.418 NVM Subsystem Reset: Not Supported 00:08:07.418 Command Sets Supported 00:08:07.418 NVM Command Set: Supported 00:08:07.418 Boot Partition: Not Supported 00:08:07.418 Memory Page Size Minimum: 4096 bytes 00:08:07.418 Memory Page Size Maximum: 65536 bytes 00:08:07.418 Persistent Memory Region: Not Supported 00:08:07.418 Optional Asynchronous Events Supported 00:08:07.418 Namespace Attribute Notices: Supported 00:08:07.418 Firmware Activation Notices: Not Supported 00:08:07.418 ANA Change Notices: Not Supported 00:08:07.418 PLE Aggregate Log Change Notices: Not Supported 00:08:07.418 LBA Status Info Alert Notices: Not Supported 00:08:07.418 EGE Aggregate Log Change Notices: Not Supported 00:08:07.418 Normal NVM Subsystem Shutdown event: Not Supported 00:08:07.418 Zone Descriptor Change Notices: Not Supported 00:08:07.418 Discovery Log Change Notices: Not Supported 00:08:07.418 Controller Attributes 00:08:07.418 128-bit Host Identifier: Not Supported 00:08:07.418 Non-Operational Permissive Mode: Not Supported 00:08:07.418 NVM Sets: Not Supported 00:08:07.418 Read Recovery Levels: Not Supported 00:08:07.418 Endurance Groups: Supported 00:08:07.418 Predictable Latency Mode: Not Supported 00:08:07.418 Traffic Based Keep Alive: Not Supported 00:08:07.418 Namespace Granularity: Not Supported 00:08:07.418 SQ Associations: Not Supported 00:08:07.418 UUID List: Not Supported 00:08:07.418 Multi-Domain Subsystem: Not Supported 00:08:07.418 Fixed Capacity Management: Not Supported 00:08:07.418 Variable Capacity Management: Not Supported 00:08:07.418 Delete Endurance Group: Not Supported 00:08:07.418 Delete NVM Set: Not Supported 00:08:07.418 Extended LBA Formats Supported: Supported 00:08:07.418 Flexible Data Placement Supported: Supported 00:08:07.418 00:08:07.418 Controller Memory Buffer Support 00:08:07.418 ================================ 00:08:07.418 Supported: No 00:08:07.418 00:08:07.418 Persistent Memory Region Support 00:08:07.418 ================================ 00:08:07.418 Supported: No 00:08:07.418 00:08:07.418 Admin Command Set Attributes 00:08:07.418 ============================ 00:08:07.418 Security Send/Receive: Not 
Supported 00:08:07.418 Format NVM: Supported 00:08:07.418 Firmware Activate/Download: Not Supported 00:08:07.418 Namespace Management: Supported 00:08:07.418 Device Self-Test: Not Supported 00:08:07.418 Directives: Supported 00:08:07.418 NVMe-MI: Not Supported 00:08:07.418 Virtualization Management: Not Supported 00:08:07.418 Doorbell Buffer Config: Supported 00:08:07.418 Get LBA Status Capability: Not Supported 00:08:07.418 Command & Feature Lockdown Capability: Not Supported 00:08:07.418 Abort Command Limit: 4 00:08:07.418 Async Event Request Limit: 4 00:08:07.418 Number of Firmware Slots: N/A 00:08:07.418 Firmware Slot 1 Read-Only: N/A 00:08:07.418 Firmware Activation Without Reset: N/A 00:08:07.418 Multiple Update Detection Support: N/A 00:08:07.418 Firmware Update Granularity: No Information Provided 00:08:07.418 Per-Namespace SMART Log: Yes 00:08:07.418 Asymmetric Namespace Access Log Page: Not Supported 00:08:07.418 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:07.418 Command Effects Log Page: Supported 00:08:07.418 Get Log Page Extended Data: Supported 00:08:07.418 Telemetry Log Pages: Not Supported 00:08:07.418 Persistent Event Log Pages: Not Supported 00:08:07.418 Supported Log Pages Log Page: May Support 00:08:07.418 Commands Supported & Effects Log Page: Not Supported 00:08:07.418 Feature Identifiers & Effects Log Page:May Support 00:08:07.418 NVMe-MI Commands & Effects Log Page: May Support 00:08:07.418 Data Area 4 for Telemetry Log: Not Supported 00:08:07.418 Error Log Page Entries Supported: 1 00:08:07.418 Keep Alive: Not Supported 00:08:07.418 00:08:07.418 NVM Command Set Attributes 00:08:07.418 ========================== 00:08:07.418 Submission Queue Entry Size 00:08:07.418 Max: 64 00:08:07.418 Min: 64 00:08:07.418 Completion Queue Entry Size 00:08:07.418 Max: 16 00:08:07.418 Min: 16 00:08:07.418 Number of Namespaces: 256 00:08:07.418 Compare Command: Supported 00:08:07.418 Write Uncorrectable Command: Not Supported 00:08:07.418 Dataset Management Command: Supported 00:08:07.418 Write Zeroes Command: Supported 00:08:07.418 Set Features Save Field: Supported 00:08:07.418 Reservations: Not Supported 00:08:07.418 Timestamp: Supported 00:08:07.418 Copy: Supported 00:08:07.418 Volatile Write Cache: Present 00:08:07.418 Atomic Write Unit (Normal): 1 00:08:07.418 Atomic Write Unit (PFail): 1 00:08:07.418 Atomic Compare & Write Unit: 1 00:08:07.418 Fused Compare & Write: Not Supported 00:08:07.418 Scatter-Gather List 00:08:07.418 SGL Command Set: Supported 00:08:07.418 SGL Keyed: Not Supported 00:08:07.418 SGL Bit Bucket Descriptor: Not Supported 00:08:07.418 SGL Metadata Pointer: Not Supported 00:08:07.418 Oversized SGL: Not Supported 00:08:07.418 SGL Metadata Address: Not Supported 00:08:07.418 SGL Offset: Not Supported 00:08:07.418 Transport SGL Data Block: Not Supported 00:08:07.418 Replay Protected Memory Block: Not Supported 00:08:07.418 00:08:07.418 Firmware Slot Information 00:08:07.418 ========================= 00:08:07.418 Active slot: 1 00:08:07.418 Slot 1 Firmware Revision: 1.0 00:08:07.418 00:08:07.418 00:08:07.418 Commands Supported and Effects 00:08:07.418 ============================== 00:08:07.418 Admin Commands 00:08:07.418 -------------- 00:08:07.418 Delete I/O Submission Queue (00h): Supported 00:08:07.418 Create I/O Submission Queue (01h): Supported 00:08:07.418 Get Log Page (02h): Supported 00:08:07.418 Delete I/O Completion Queue (04h): Supported 00:08:07.418 Create I/O Completion Queue (05h): Supported 00:08:07.418 Identify (06h): Supported 
00:08:07.418 Abort (08h): Supported 00:08:07.418 Set Features (09h): Supported 00:08:07.418 Get Features (0Ah): Supported 00:08:07.418 Asynchronous Event Request (0Ch): Supported 00:08:07.418 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:07.418 Directive Send (19h): Supported 00:08:07.418 Directive Receive (1Ah): Supported 00:08:07.418 Virtualization Management (1Ch): Supported 00:08:07.418 Doorbell Buffer Config (7Ch): Supported 00:08:07.418 Format NVM (80h): Supported LBA-Change 00:08:07.418 I/O Commands 00:08:07.418 ------------ 00:08:07.418 Flush (00h): Supported LBA-Change 00:08:07.418 Write (01h): Supported LBA-Change 00:08:07.418 Read (02h): Supported 00:08:07.418 Compare (05h): Supported 00:08:07.418 Write Zeroes (08h): Supported LBA-Change 00:08:07.418 Dataset Management (09h): Supported LBA-Change 00:08:07.419 Unknown (0Ch): Supported 00:08:07.419 Unknown (12h): Supported 00:08:07.419 Copy (19h): Supported LBA-Change 00:08:07.419 Unknown (1Dh): Supported LBA-Change 00:08:07.419 00:08:07.419 Error Log 00:08:07.419 ========= 00:08:07.419 00:08:07.419 Arbitration 00:08:07.419 =========== 00:08:07.419 Arbitration Burst: no limit 00:08:07.419 00:08:07.419 Power Management 00:08:07.419 ================ 00:08:07.419 Number of Power States: 1 00:08:07.419 Current Power State: Power State #0 00:08:07.419 Power State #0: 00:08:07.419 Max Power: 25.00 W 00:08:07.419 Non-Operational State: Operational 00:08:07.419 Entry Latency: 16 microseconds 00:08:07.419 Exit Latency: 4 microseconds 00:08:07.419 Relative Read Throughput: 0 00:08:07.419 Relative Read Latency: 0 00:08:07.419 Relative Write Throughput: 0 00:08:07.419 Relative Write Latency: 0 00:08:07.419 Idle Power: Not Reported 00:08:07.419 Active Power: Not Reported 00:08:07.419 Non-Operational Permissive Mode: Not Supported 00:08:07.419 00:08:07.419 Health Information 00:08:07.419 ================== 00:08:07.419 Critical Warnings: 00:08:07.419 Available Spare Space: OK 00:08:07.419 Temperature: OK 00:08:07.419 Device Reliability: OK 00:08:07.419 Read Only: No 00:08:07.419 Volatile Memory Backup: OK 00:08:07.419 Current Temperature: 323 Kelvin (50 Celsius) 00:08:07.419 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:07.419 Available Spare: 0% 00:08:07.419 Available Spare Threshold: 0% 00:08:07.419 Life Percentage Used: 0% 00:08:07.419 Data Units Read: 879 00:08:07.419 Data Units Written: 808 00:08:07.419 Host Read Commands: 38681 00:08:07.419 Host Write Commands: 38104 00:08:07.419 Controller Busy Time: 0 minutes 00:08:07.419 Power Cycles: 0 00:08:07.419 Power On Hours: 0 hours 00:08:07.419 Unsafe Shutdowns: 0 00:08:07.419 Unrecoverable Media Errors: 0 00:08:07.419 Lifetime Error Log Entries: 0 00:08:07.419 Warning Temperature Time: 0 minutes 00:08:07.419 Critical Temperature Time: 0 minutes 00:08:07.419 00:08:07.419 Number of Queues 00:08:07.419 ================ 00:08:07.419 Number of I/O Submission Queues: 64 00:08:07.419 Number of I/O Completion Queues: 64 00:08:07.419 00:08:07.419 ZNS Specific Controller Data 00:08:07.419 ============================ 00:08:07.419 Zone Append Size Limit: 0 00:08:07.419 00:08:07.419 00:08:07.419 Active Namespaces 00:08:07.419 ================= 00:08:07.419 Namespace ID:1 00:08:07.419 Error Recovery Timeout: Unlimited 00:08:07.419 Command Set Identifier: NVM (00h) 00:08:07.419 Deallocate: Supported 00:08:07.419 Deallocated/Unwritten Error: Supported 00:08:07.419 Deallocated Read Value: All 0x00 00:08:07.419 Deallocate in Write Zeroes: Not Supported 00:08:07.419 Deallocated Guard 
Field: 0xFFFF 00:08:07.419 Flush: Supported 00:08:07.419 Reservation: Not Supported 00:08:07.419 Namespace Sharing Capabilities: Multiple Controllers 00:08:07.419 Size (in LBAs): 262144 (1GiB) 00:08:07.419 Capacity (in LBAs): 262144 (1GiB) 00:08:07.419 Utilization (in LBAs): 262144 (1GiB) 00:08:07.419 Thin Provisioning: Not Supported 00:08:07.419 Per-NS Atomic Units: No 00:08:07.419 Maximum Single Source Range Length: 128 00:08:07.419 Maximum Copy Length: 128 00:08:07.419 Maximum Source Range Count: 128 00:08:07.419 NGUID/EUI64 Never Reused: No 00:08:07.419 Namespace Write Protected: No 00:08:07.419 Endurance group ID: 1 00:08:07.419 Number of LBA Formats: 8 00:08:07.419 Current LBA Format: LBA Format #04 00:08:07.419 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:07.419 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:07.419 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:07.419 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:07.419 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:07.419 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:07.419 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:07.419 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:07.419 00:08:07.419 Get Feature FDP: 00:08:07.419 ================ 00:08:07.419 Enabled: Yes 00:08:07.419 FDP configuration index: 0 00:08:07.419 00:08:07.419 FDP configurations log page 00:08:07.419 =========================== 00:08:07.419 Number of FDP configurations: 1 00:08:07.419 Version: 0 00:08:07.419 Size: 112 00:08:07.419 FDP Configuration Descriptor: 0 00:08:07.419 Descriptor Size: 96 00:08:07.419 Reclaim Group Identifier format: 2 00:08:07.419 FDP Volatile Write Cache: Not Present 00:08:07.419 FDP Configuration: Valid 00:08:07.419 Vendor Specific Size: 0 00:08:07.419 Number of Reclaim Groups: 2 00:08:07.419 Number of Reclaim Unit Handles: 8 00:08:07.419 Max Placement Identifiers: 128 00:08:07.419 Number of Namespaces Supported: 256 00:08:07.419 Reclaim unit Nominal Size: 6000000 bytes 00:08:07.419 Estimated Reclaim Unit Time Limit: Not Reported 00:08:07.419 RUH Desc #000: RUH Type: Initially Isolated 00:08:07.419 RUH Desc #001: RUH Type: Initially Isolated 00:08:07.419 RUH Desc #002: RUH Type: Initially Isolated 00:08:07.419 RUH Desc #003: RUH Type: Initially Isolated 00:08:07.419 RUH Desc #004: RUH Type: Initially Isolated 00:08:07.419 RUH Desc #005: RUH Type: Initially Isolated 00:08:07.419 RUH Desc #006: RUH Type: Initially Isolated 00:08:07.419 RUH Desc #007: RUH Type: Initially Isolated 00:08:07.419 00:08:07.419 FDP reclaim unit handle usage log page 00:08:07.419 ====================================== 00:08:07.419 Number of Reclaim Unit Handles: 8 00:08:07.419 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:07.419 RUH Usage Desc #001: RUH Attributes: Unused 00:08:07.419 RUH Usage Desc #002: RUH Attributes: Unused 00:08:07.419 RUH Usage Desc #003: RUH Attributes: Unused 00:08:07.419 RUH Usage Desc #004: RUH Attributes: Unused 00:08:07.419 RUH Usage Desc #005: RUH Attributes: Unused 00:08:07.419 RUH Usage Desc #006: RUH Attributes: Unused 00:08:07.419 RUH Usage Desc #007: RUH Attributes: Unused 00:08:07.419 00:08:07.419 FDP statistics log page 00:08:07.419 ======================= 00:08:07.419 Host bytes with metadata written: 515743744 00:08:07.419 Media bytes with metadata written: 515801088 00:08:07.419 Media bytes erased: 0 00:08:07.419 00:08:07.419 FDP events log page 00:08:07.419 =================== 00:08:07.419 Number of FDP events: 0 00:08:07.419 00:08:07.419 
NVM Specific Namespace Data 00:08:07.419 =========================== 00:08:07.419 Logical Block Storage Tag Mask: 0 00:08:07.419 Protection Information Capabilities: 00:08:07.419 16b Guard Protection Information Storage Tag Support: No 00:08:07.419 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:07.419 Storage Tag Check Read Support: No 00:08:07.419 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.419 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.419 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.420 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.420 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.420 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.420 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.420 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.420 ===================================================== 00:08:07.420 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:07.420 ===================================================== 00:08:07.420 Controller Capabilities/Features 00:08:07.420 ================================ 00:08:07.420 Vendor ID: 1b36 00:08:07.420 Subsystem Vendor ID: 1af4 00:08:07.420 Serial Number: 12340 00:08:07.420 Model Number: QEMU NVMe Ctrl 00:08:07.420 Firmware Version: 8.0.0 00:08:07.420 Recommended Arb Burst: 6 00:08:07.420 IEEE OUI Identifier: 00 54 52 00:08:07.420 Multi-path I/O 00:08:07.420 May have multiple subsystem ports: No 00:08:07.420 May have multiple controllers: No 00:08:07.420 Associated with SR-IOV VF: No 00:08:07.420 Max Data Transfer Size: 524288 00:08:07.420 Max Number of Namespaces: 256 00:08:07.420 Max Number of I/O Queues: 64 00:08:07.420 NVMe Specification Version (VS): 1.4 00:08:07.420 NVMe Specification Version (Identify): 1.4 00:08:07.420 Maximum Queue Entries: 2048 00:08:07.420 Contiguous Queues Required: Yes 00:08:07.420 Arbitration Mechanisms Supported 00:08:07.420 Weighted Round Robin: Not Supported 00:08:07.420 Vendor Specific: Not Supported 00:08:07.420 Reset Timeout: 7500 ms 00:08:07.420 Doorbell Stride: 4 bytes 00:08:07.420 NVM Subsystem Reset: Not Supported 00:08:07.420 Command Sets Supported 00:08:07.420 NVM Command Set: Supported 00:08:07.420 Boot Partition: Not Supported 00:08:07.420 Memory Page Size Minimum: 4096 bytes 00:08:07.420 Memory Page Size Maximum: 65536 bytes 00:08:07.420 Persistent Memory Region: Not Supported 00:08:07.420 Optional Asynchronous Events Supported 00:08:07.420 Namespace Attribute Notices: Supported 00:08:07.420 Firmware Activation Notices: Not Supported 00:08:07.420 ANA Change Notices: Not Supported 00:08:07.420 PLE Aggregate Log Change Notices: Not Supported 00:08:07.420 LBA Status Info Alert Notices: Not Supported 00:08:07.420 EGE Aggregate Log Change Notices: Not Supported 00:08:07.420 Normal NVM Subsystem Shutdown event: Not Supported 00:08:07.420 Zone Descriptor Change Notices: Not Supported 00:08:07.420 Discovery Log Change Notices: Not Supported 00:08:07.420 Controller Attributes 00:08:07.420 128-bit Host Identifier: Not Supported 00:08:07.420 Non-Operational Permissive Mode: Not Supported 00:08:07.420 NVM Sets: Not Supported 00:08:07.420 Read Recovery 
Levels: Not Supported 00:08:07.420 Endurance Groups: Not Supported 00:08:07.420 Predictable Latency Mode: Not Supported 00:08:07.420 Traffic Based Keep Alive: Not Supported 00:08:07.420 Namespace Granularity: Not Supported 00:08:07.420 SQ Associations: Not Supported 00:08:07.420 UUID List: Not Supported 00:08:07.420 Multi-Domain Subsystem: Not Supported 00:08:07.420 Fixed Capacity Management: Not Supported 00:08:07.420 Variable Capacity Management: Not Supported 00:08:07.420 Delete Endurance Group: Not Supported 00:08:07.420 Delete NVM Set: Not Supported 00:08:07.420 Extended LBA Formats Supported: Supported 00:08:07.420 Flexible Data Placement Supported: Not Supported 00:08:07.420 00:08:07.420 Controller Memory Buffer Support 00:08:07.420 ================================ 00:08:07.420 Supported: No 00:08:07.420 00:08:07.420 Persistent Memory Region Support 00:08:07.420 ================================ 00:08:07.420 Supported: No 00:08:07.420 00:08:07.420 Admin Command Set Attributes 00:08:07.420 ============================ 00:08:07.420 Security Send/Receive: Not Supported 00:08:07.420 Format NVM: Supported 00:08:07.420 Firmware Activate/Download: Not Supported 00:08:07.420 Namespace Management: Supported 00:08:07.420 Device Self-Test: Not Supported 00:08:07.420 Directives: Supported 00:08:07.420 NVMe-MI: Not Supported 00:08:07.420 Virtualization Management: Not Supported 00:08:07.420 Doorbell Buffer Config: Supported 00:08:07.420 Get LBA Status Capability: Not Supported 00:08:07.420 Command & Feature Lockdown Capability: Not Supported 00:08:07.420 Abort Command Limit: 4 00:08:07.420 Async Event Request Limit: 4 00:08:07.420 Number of Firmware Slots: N/A 00:08:07.420 Firmware Slot 1 Read-Only: N/A 00:08:07.420 Firmware Activation Without Reset: N/A 00:08:07.420 Multiple Update Detection Support: N/A 00:08:07.420 Firmware Update Granularity: No Information Provided 00:08:07.420 Per-Namespace SMART Log: Yes 00:08:07.420 Asymmetric Namespace Access Log Page: Not Supported 00:08:07.420 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:07.420 Command Effects Log Page: Supported 00:08:07.420 Get Log Page Extended Data: Supported 00:08:07.420 Telemetry Log Pages: Not Supported 00:08:07.420 Persistent Event Log Pages: Not Supported 00:08:07.420 Supported Log Pages Log Page: May Support 00:08:07.420 Commands Supported & Effects Log Page: Not Supported 00:08:07.420 Feature Identifiers & Effects Log Page:May Support 00:08:07.420 NVMe-MI Commands & Effects Log Page: May Support 00:08:07.420 Data Area 4 for Telemetry Log: Not Supported 00:08:07.420 Error Log Page Entries Supported: 1 00:08:07.420 Keep Alive: Not Supported 00:08:07.420 00:08:07.420 NVM Command Set Attributes 00:08:07.420 ========================== 00:08:07.420 Submission Queue Entry Size 00:08:07.420 Max: 64 00:08:07.420 Min: 64 00:08:07.420 Completion Queue Entry Size 00:08:07.420 Max: 16 00:08:07.420 Min: 16 00:08:07.420 Number of Namespaces: 256 00:08:07.420 Compare Command: Supported 00:08:07.420 Write Uncorrectable Command: Not Supported 00:08:07.420 Dataset Management Command: Supported 00:08:07.420 Write Zeroes Command: Supported 00:08:07.420 Set Features Save Field: Supported 00:08:07.420 Reservations: Not Supported 00:08:07.420 Timestamp: Supported 00:08:07.420 Copy: Supported 00:08:07.420 Volatile Write Cache: Present 00:08:07.420 Atomic Write Unit (Normal): 1 00:08:07.420 Atomic Write Unit (PFail): 1 00:08:07.420 Atomic Compare & Write Unit: 1 00:08:07.420 Fused Compare & Write: Not Supported 00:08:07.420 Scatter-Gather List 
00:08:07.420 SGL Command Set: Supported 00:08:07.420 SGL Keyed: Not Supported 00:08:07.420 SGL Bit Bucket Descriptor: Not Supported 00:08:07.420 SGL Metadata Pointer: Not Supported 00:08:07.420 Oversized SGL: Not Supported 00:08:07.420 SGL Metadata Address: Not Supported 00:08:07.420 SGL Offset: Not Supported 00:08:07.420 Transport SGL Data Block: Not Supported 00:08:07.420 Replay Protected Memory Block: Not Supported 00:08:07.420 00:08:07.420 Firmware Slot Information 00:08:07.420 ========================= 00:08:07.420 Active slot: 1 00:08:07.420 Slot 1 Firmware Revision: 1.0 00:08:07.420 00:08:07.420 00:08:07.420 Commands Supported and Effects 00:08:07.421 ============================== 00:08:07.421 Admin Commands 00:08:07.421 -------------- 00:08:07.421 Delete I/O Submission Queue (00h): Supported 00:08:07.421 Create I/O Submission Queue (01h): Supported 00:08:07.421 Get Log Page (02h): Supported 00:08:07.421 Delete I/O Completion Queue (04h): Supported 00:08:07.421 Create I/O Completion Queue (05h): Supported 00:08:07.421 Identify (06h): Supported 00:08:07.421 Abort (08h): Supported 00:08:07.421 Set Features (09h): Supported 00:08:07.421 Get Features (0Ah): Supported 00:08:07.421 Asynchronous Event Request (0Ch): Supported 00:08:07.421 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:07.421 Directive Send (19h): Supported 00:08:07.421 Directive Receive (1Ah): Supported 00:08:07.421 Virtualization Management (1Ch): Supported 00:08:07.421 Doorbell Buffer Config (7Ch): Supported 00:08:07.421 Format NVM (80h): Supported LBA-Change 00:08:07.421 I/O Commands 00:08:07.421 ------------ 00:08:07.421 Flush (00h): Supported LBA-Change 00:08:07.421 Write (01h): Supported LBA-Change 00:08:07.421 Read (02h): Supported 00:08:07.421 Compare (05h): Supported 00:08:07.421 Write Zeroes (08h): Supported LBA-Change 00:08:07.421 Dataset Management (09h): Supported LBA-Change 00:08:07.421 Unknown (0Ch): Supported 00:08:07.421 Unknown (12h): Supported 00:08:07.421 Copy (19h): Supported LBA-Change 00:08:07.421 Unknown (1Dh): Supported LBA-Change 00:08:07.421 00:08:07.421 Error Log 00:08:07.421 ========= 00:08:07.421 00:08:07.421 Arbitration 00:08:07.421 =========== 00:08:07.421 Arbitration Burst: no limit 00:08:07.421 00:08:07.421 Power Management 00:08:07.421 ================ 00:08:07.421 Number of Power States: 1 00:08:07.421 Current Power State: Power State #0 00:08:07.421 Power State #0: 00:08:07.421 Max Power: 25.00 W 00:08:07.421 Non-Operational State: Operational 00:08:07.421 Entry Latency: 16 microseconds 00:08:07.421 Exit Latency: 4 microseconds 00:08:07.421 Relative Read Throughput: 0 00:08:07.421 Relative Read Latency: 0 00:08:07.421 Relative Write Throughput: 0 00:08:07.421 Relative Write Latency: 0 00:08:07.421 Idle Power: Not Reported 00:08:07.421 Active Power: Not Reported 00:08:07.421 Non-Operational Permissive Mode: Not Supported 00:08:07.421 00:08:07.421 Health Information 00:08:07.421 ================== 00:08:07.421 Critical Warnings: 00:08:07.421 Available Spare Space: OK 00:08:07.421 Temperature: OK 00:08:07.421 Device Reliability: OK 00:08:07.421 Read Only: No 00:08:07.421 Volatile Memory Backup: OK 00:08:07.421 Current Temperature: 323 Kelvin (50 Celsius) 00:08:07.421 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:07.421 Available Spare: 0% 00:08:07.421 Available Spare Threshold: 0% 00:08:07.421 Life Percentage Used: 0% 00:08:07.421 Data Units Read: 683 00:08:07.421 Data Units Written: 611 00:08:07.421 Host Read Commands: 36865 00:08:07.421 Host Write Commands: 36651 
00:08:07.421 Controller Busy Time: 0 minutes 00:08:07.421 Power Cycles: 0 00:08:07.421 Power On Hours: 0 hours 00:08:07.421 Unsafe Shutdowns: 0 00:08:07.421 Unrecoverable Media Errors: 0 00:08:07.421 Lifetime Error Log Entries: 0 00:08:07.421 Warning Temperature Time: 0 minutes 00:08:07.421 Critical Temperature Time: 0 minutes 00:08:07.421 00:08:07.421 Number of Queues 00:08:07.421 ================ 00:08:07.421 Number of I/O Submission Queues: 64 00:08:07.421 Number of I/O Completion Queues: 64 00:08:07.421 00:08:07.421 ZNS Specific Controller Data 00:08:07.421 ============================ 00:08:07.421 Zone Append Size Limit: 0 00:08:07.421 00:08:07.421 00:08:07.421 Active Namespaces 00:08:07.421 ================= 00:08:07.421 Namespace ID:1 00:08:07.421 Error Recovery Timeout: Unlimited 00:08:07.421 Command Set Identifier: NVM (00h) 00:08:07.421 Deallocate: Supported 00:08:07.421 Deallocated/Unwritten Error: Supported 00:08:07.421 Deallocated Read Value: All 0x00 00:08:07.421 Deallocate in Write Zeroes: Not Supported 00:08:07.421 Deallocated Guard Field: 0xFFFF 00:08:07.421 Flush: Supported 00:08:07.421 Reservation: Not Supported 00:08:07.421 Metadata Transferred as: Separate Metadata Buffer 00:08:07.421 Namespace Sharing Capabilities: Private 00:08:07.421 Size (in LBAs): 1548666 (5GiB) 00:08:07.421 Capacity (in LBAs): 1548666 (5GiB) 00:08:07.421 Utilization (in LBAs): 1548666 (5GiB) 00:08:07.421 Thin Provisioning: Not Supported 00:08:07.421 Per-NS Atomic Units: No 00:08:07.421 Maximum Single Source Range Length: 128 00:08:07.421 Maximum Copy Length: 128 00:08:07.421 Maximum Source Range Count: 128 00:08:07.421 NGUID/EUI64 Never Reused: No 00:08:07.421 Namespace Write Protected: No 00:08:07.421 Number of LBA Formats: 8 00:08:07.421 Current LBA Format: LBA Format #07 00:08:07.421 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:07.421 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:07.421 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:07.421 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:07.421 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:07.421 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:07.421 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:07.421 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:07.421 00:08:07.421 NVM Specific Namespace Data 00:08:07.421 =========================== 00:08:07.421 Logical Block Storage Tag Mask: 0 00:08:07.421 Protection Information Capabilities: 00:08:07.421 16b Guard Protection Information Storage Tag Support: No 00:08:07.421 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:07.421 Storage Tag Check Read Support: No 00:08:07.421 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.421 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.421 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.421 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.421 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.421 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.421 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.421 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.421 
===================================================== 00:08:07.421 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:07.421 ===================================================== 00:08:07.421 Controller Capabilities/Features 00:08:07.421 ================================ 00:08:07.421 Vendor ID: 1b36 00:08:07.421 Subsystem Vendor ID: 1af4 00:08:07.421 Serial Number: 12341 00:08:07.421 Model Number: QEMU NVMe Ctrl 00:08:07.421 Firmware Version: 8.0.0 00:08:07.421 Recommended Arb Burst: 6 00:08:07.421 IEEE OUI Identifier: 00 54 52 00:08:07.421 Multi-path I/O 00:08:07.421 May have multiple subsystem ports: No 00:08:07.421 May have multiple controllers: No 00:08:07.421 Associated with SR-IOV VF: No 00:08:07.421 Max Data Transfer Size: 524288 00:08:07.421 Max Number of Namespaces: 256 00:08:07.421 Max Number of I/O Queues: 64 00:08:07.421 NVMe Specification Version (VS): 1.4 00:08:07.421 NVMe Specification Version (Identify): 1.4 00:08:07.421 Maximum Queue Entries: 2048 00:08:07.421 Contiguous Queues Required: Yes 00:08:07.421 Arbitration Mechanisms Supported 00:08:07.421 Weighted Round Robin: Not Supported 00:08:07.421 Vendor Specific: Not Supported 00:08:07.421 Reset Timeout: 7500 ms 00:08:07.421 Doorbell Stride: 4 bytes 00:08:07.421 NVM Subsystem Reset: Not Supported 00:08:07.421 Command Sets Supported 00:08:07.421 NVM Command Set: Supported 00:08:07.421 Boot Partition: Not Supported 00:08:07.421 Memory Page Size Minimum: 4096 bytes 00:08:07.421 Memory Page Size Maximum: 65536 bytes 00:08:07.421 Persistent Memory Region: Not Supported 00:08:07.421 Optional Asynchronous Events Supported 00:08:07.421 Namespace Attribute Notices: Supported 00:08:07.421 Firmware Activation Notices: Not Supported 00:08:07.421 ANA Change Notices: Not Supported 00:08:07.421 PLE Aggregate Log Change Notices: Not Supported 00:08:07.421 LBA Status Info Alert Notices: Not Supported 00:08:07.421 EGE Aggregate Log Change Notices: Not Supported 00:08:07.422 Normal NVM Subsystem Shutdown event: Not Supported 00:08:07.422 Zone Descriptor Change Notices: Not Supported 00:08:07.422 Discovery Log Change Notices: Not Supported 00:08:07.422 Controller Attributes 00:08:07.422 128-bit Host Identifier: Not Supported 00:08:07.422 Non-Operational Permissive Mode: Not Supported 00:08:07.422 NVM Sets: Not Supported 00:08:07.422 Read Recovery Levels: Not Supported 00:08:07.422 Endurance Groups: Not Supported 00:08:07.422 Predictable Latency Mode: Not Supported 00:08:07.422 Traffic Based Keep ALive: Not Supported 00:08:07.422 Namespace Granularity: Not Supported 00:08:07.422 SQ Associations: Not Supported 00:08:07.422 UUID List: Not Supported 00:08:07.422 Multi-Domain Subsystem: Not Supported 00:08:07.422 Fixed Capacity Management: Not Supported 00:08:07.422 Variable Capacity Management: Not Supported 00:08:07.422 Delete Endurance Group: Not Supported 00:08:07.422 Delete NVM Set: Not Supported 00:08:07.422 Extended LBA Formats Supported: Supported 00:08:07.422 Flexible Data Placement Supported: Not Supported 00:08:07.422 00:08:07.422 Controller Memory Buffer Support 00:08:07.422 ================================ 00:08:07.422 Supported: No 00:08:07.422 00:08:07.422 Persistent Memory Region Support 00:08:07.422 ================================ 00:08:07.422 Supported: No 00:08:07.422 00:08:07.422 Admin Command Set Attributes 00:08:07.422 ============================ 00:08:07.422 Security Send/Receive: Not Supported 00:08:07.422 Format NVM: Supported 00:08:07.422 Firmware Activate/Download: Not Supported 00:08:07.422 Namespace Management: 
Supported 00:08:07.422 Device Self-Test: Not Supported 00:08:07.422 Directives: Supported 00:08:07.422 NVMe-MI: Not Supported 00:08:07.422 Virtualization Management: Not Supported 00:08:07.422 Doorbell Buffer Config: Supported 00:08:07.422 Get LBA Status Capability: Not Supported 00:08:07.422 Command & Feature Lockdown Capability: Not Supported 00:08:07.422 Abort Command Limit: 4 00:08:07.422 Async Event Request Limit: 4 00:08:07.422 Number of Firmware Slots: N/A 00:08:07.422 Firmware Slot 1 Read-Only: N/A 00:08:07.422 Firmware Activation Without Reset: N/A 00:08:07.422 Multiple Update Detection Support: N/A 00:08:07.422 Firmware Update Granularity: No Information Provided 00:08:07.422 Per-Namespace SMART Log: Yes 00:08:07.422 Asymmetric Namespace Access Log Page: Not Supported 00:08:07.422 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:07.422 Command Effects Log Page: Supported 00:08:07.422 Get Log Page Extended Data: Supported 00:08:07.422 Telemetry Log Pages: Not Supported 00:08:07.422 Persistent Event Log Pages: Not Supported 00:08:07.422 Supported Log Pages Log Page: May Support 00:08:07.422 Commands Supported & Effects Log Page: Not Supported 00:08:07.422 Feature Identifiers & Effects Log Page:May Support 00:08:07.422 NVMe-MI Commands & Effects Log Page: May Support 00:08:07.422 Data Area 4 for Telemetry Log: Not Supported 00:08:07.422 Error Log Page Entries Supported: 1 00:08:07.422 Keep Alive: Not Supported 00:08:07.422 00:08:07.422 NVM Command Set Attributes 00:08:07.422 ========================== 00:08:07.422 Submission Queue Entry Size 00:08:07.422 Max: 64 00:08:07.422 Min: 64 00:08:07.422 Completion Queue Entry Size 00:08:07.422 Max: 16 00:08:07.422 Min: 16 00:08:07.422 Number of Namespaces: 256 00:08:07.422 Compare Command: Supported 00:08:07.422 Write Uncorrectable Command: Not Supported 00:08:07.422 Dataset Management Command: Supported 00:08:07.422 Write Zeroes Command: Supported 00:08:07.422 Set Features Save Field: Supported 00:08:07.422 Reservations: Not Supported 00:08:07.422 Timestamp: Supported 00:08:07.422 Copy: Supported 00:08:07.422 Volatile Write Cache: Present 00:08:07.422 Atomic Write Unit (Normal): 1 00:08:07.422 Atomic Write Unit (PFail): 1 00:08:07.422 Atomic Compare & Write Unit: 1 00:08:07.422 Fused Compare & Write: Not Supported 00:08:07.422 Scatter-Gather List 00:08:07.422 SGL Command Set: Supported 00:08:07.422 SGL Keyed: Not Supported 00:08:07.422 SGL Bit Bucket Descriptor: Not Supported 00:08:07.422 SGL Metadata Pointer: Not Supported 00:08:07.422 Oversized SGL: Not Supported 00:08:07.422 SGL Metadata Address: Not Supported 00:08:07.422 SGL Offset: Not Supported 00:08:07.422 Transport SGL Data Block: Not Supported 00:08:07.422 Replay Protected Memory Block: Not Supported 00:08:07.422 00:08:07.422 Firmware Slot Information 00:08:07.422 ========================= 00:08:07.422 Active slot: 1 00:08:07.422 Slot 1 Firmware Revision: 1.0 00:08:07.422 00:08:07.422 00:08:07.422 Commands Supported and Effects 00:08:07.422 ============================== 00:08:07.422 Admin Commands 00:08:07.422 -------------- 00:08:07.422 Delete I/O Submission Queue (00h): Supported 00:08:07.422 Create I/O Submission Queue (01h): Supported 00:08:07.422 Get Log Page (02h): Supported 00:08:07.422 Delete I/O Completion Queue (04h): Supported 00:08:07.422 Create I/O Completion Queue (05h): Supported 00:08:07.422 Identify (06h): Supported 00:08:07.422 Abort (08h): Supported 00:08:07.422 Set Features (09h): Supported 00:08:07.422 Get Features (0Ah): Supported 00:08:07.422 Asynchronous 
Event Request (0Ch): Supported 00:08:07.422 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:07.422 Directive Send (19h): Supported 00:08:07.422 Directive Receive (1Ah): Supported 00:08:07.422 Virtualization Management (1Ch): Supported 00:08:07.422 Doorbell Buffer Config (7Ch): Supported 00:08:07.422 Format NVM (80h): Supported LBA-Change 00:08:07.422 I/O Commands 00:08:07.422 ------------ 00:08:07.422 Flush (00h): Supported LBA-Change 00:08:07.422 Write (01h): Supported LBA-Change 00:08:07.422 Read (02h): Supported 00:08:07.422 Compare (05h): Supported 00:08:07.422 Write Zeroes (08h): Supported LBA-Change 00:08:07.422 Dataset Management (09h): Supported LBA-Change 00:08:07.422 Unknown (0Ch): Supported 00:08:07.422 Unknown (12h): Supported 00:08:07.422 Copy (19h): Supported LBA-Change 00:08:07.422 Unknown (1Dh): Supported LBA-Change 00:08:07.422 00:08:07.422 Error Log 00:08:07.422 ========= 00:08:07.422 00:08:07.422 Arbitration 00:08:07.422 =========== 00:08:07.422 Arbitration Burst: no limit 00:08:07.422 00:08:07.422 Power Management 00:08:07.422 ================ 00:08:07.422 Number of Power States: 1 00:08:07.422 Current Power State: Power State #0 00:08:07.422 Power State #0: 00:08:07.422 Max Power: 25.00 W 00:08:07.422 Non-Operational State: Operational 00:08:07.422 Entry Latency: 16 microseconds 00:08:07.422 Exit Latency: 4 microseconds 00:08:07.422 Relative Read Throughput: 0 00:08:07.422 Relative Read Latency: 0 00:08:07.422 Relative Write Throughput: 0 00:08:07.422 Relative Write Latency: 0 00:08:07.422 Idle Power: Not Reported 00:08:07.422 Active Power: Not Reported 00:08:07.422 Non-Operational Permissive Mode: Not Supported 00:08:07.422 00:08:07.422 Health Information 00:08:07.422 ================== 00:08:07.422 Critical Warnings: 00:08:07.422 Available Spare Space: OK 00:08:07.422 Temperature: OK 00:08:07.422 Device Reliability: OK 00:08:07.422 Read Only: No 00:08:07.422 Volatile Memory Backup: OK 00:08:07.422 Current Temperature: 323 Kelvin (50 Celsius) 00:08:07.422 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:07.422 Available Spare: 0% 00:08:07.422 Available Spare Threshold: 0% 00:08:07.422 Life Percentage Used: 0% 00:08:07.422 Data Units Read: 1053 00:08:07.422 Data Units Written: 926 00:08:07.422 Host Read Commands: 54913 00:08:07.422 Host Write Commands: 53802 00:08:07.422 Controller Busy Time: 0 minutes 00:08:07.422 Power Cycles: 0 00:08:07.423 Power On Hours: 0 hours 00:08:07.423 Unsafe Shutdowns: 0 00:08:07.423 Unrecoverable Media Errors: 0 00:08:07.423 Lifetime Error Log Entries: 0 00:08:07.423 Warning Temperature Time: 0 minutes 00:08:07.423 Critical Temperature Time: 0 minutes 00:08:07.423 00:08:07.423 Number of Queues 00:08:07.423 ================ 00:08:07.423 Number of I/O Submission Queues: 64 00:08:07.423 Number of I/O Completion Queues: 64 00:08:07.423 00:08:07.423 ZNS Specific Controller Data 00:08:07.423 ============================ 00:08:07.423 Zone Append Size Limit: 0 00:08:07.423 00:08:07.423 00:08:07.423 Active Namespaces 00:08:07.423 ================= 00:08:07.423 Namespace ID:1 00:08:07.423 Error Recovery Timeout: Unlimited 00:08:07.423 Command Set Identifier: NVM (00h) 00:08:07.423 Deallocate: Supported 00:08:07.423 Deallocated/Unwritten Error: Supported 00:08:07.423 Deallocated Read Value: All 0x00 00:08:07.423 Deallocate in Write Zeroes: Not Supported 00:08:07.423 Deallocated Guard Field: 0xFFFF 00:08:07.423 Flush: Supported 00:08:07.423 Reservation: Not Supported 00:08:07.423 Namespace Sharing Capabilities: Private 00:08:07.423 
Size (in LBAs): 1310720 (5GiB) 00:08:07.423 Capacity (in LBAs): 1310720 (5GiB) 00:08:07.423 Utilization (in LBAs): 1310720 (5GiB) 00:08:07.423 Thin Provisioning: Not Supported 00:08:07.423 Per-NS Atomic Units: No 00:08:07.423 Maximum Single Source Range Length: 128 00:08:07.423 Maximum Copy Length: 128 00:08:07.423 Maximum Source Range Count: 128 00:08:07.423 NGUID/EUI64 Never Reused: No 00:08:07.423 Namespace Write Protected: No 00:08:07.423 Number of LBA Formats: 8 00:08:07.423 Current LBA Format: LBA Format #04 00:08:07.423 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:07.423 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:07.423 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:07.423 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:07.423 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:07.423 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:07.423 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:07.423 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:07.423 00:08:07.423 NVM Specific Namespace Data 00:08:07.423 =========================== 00:08:07.423 Logical Block Storage Tag Mask: 0 00:08:07.423 Protection Information Capabilities: 00:08:07.423 16b Guard Protection Information Storage Tag Support: No 00:08:07.423 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:07.423 Storage Tag Check Read Support: No 00:08:07.423 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.423 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.423 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.423 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.423 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.423 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.423 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.423 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.423 ===================================================== 00:08:07.423 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:07.423 ===================================================== 00:08:07.423 Controller Capabilities/Features 00:08:07.423 ================================ 00:08:07.423 Vendor ID: 1b36 00:08:07.423 Subsystem Vendor ID: 1af4 00:08:07.423 Serial Number: 12342 00:08:07.423 Model Number: QEMU NVMe Ctrl 00:08:07.423 Firmware Version: 8.0.0 00:08:07.423 Recommended Arb Burst: 6 00:08:07.423 IEEE OUI Identifier: 00 54 52 00:08:07.423 Multi-path I/O 00:08:07.423 May have multiple subsystem ports: No 00:08:07.423 May have multiple controllers: No 00:08:07.423 Associated with SR-IOV VF: No 00:08:07.423 Max Data Transfer Size: 524288 00:08:07.423 Max Number of Namespaces: 256 00:08:07.423 Max Number of I/O Queues: 64 00:08:07.423 NVMe Specification Version (VS): 1.4 00:08:07.423 NVMe Specification Version (Identify): 1.4 00:08:07.423 Maximum Queue Entries: 2048 00:08:07.423 Contiguous Queues Required: Yes 00:08:07.423 Arbitration Mechanisms Supported 00:08:07.423 Weighted Round Robin: Not Supported 00:08:07.423 Vendor Specific: Not Supported 00:08:07.423 Reset Timeout: 7500 ms 00:08:07.423 Doorbell Stride: 4 bytes 00:08:07.423 NVM Subsystem Reset: Not Supported 
00:08:07.423 Command Sets Supported 00:08:07.423 NVM Command Set: Supported 00:08:07.423 Boot Partition: Not Supported 00:08:07.423 Memory Page Size Minimum: 4096 bytes 00:08:07.423 Memory Page Size Maximum: 65536 bytes 00:08:07.423 Persistent Memory Region: Not Supported 00:08:07.423 Optional Asynchronous Events Supported 00:08:07.423 Namespace Attribute Notices: Supported 00:08:07.423 Firmware Activation Notices: Not Supported 00:08:07.423 ANA Change Notices: Not Supported 00:08:07.423 PLE Aggregate Log Change Notices: Not Supported 00:08:07.423 LBA Status Info Alert Notices: Not Supported 00:08:07.423 EGE Aggregate Log Change Notices: Not Supported 00:08:07.423 Normal NVM Subsystem Shutdown event: Not Supported 00:08:07.423 Zone Descriptor Change Notices: Not Supported 00:08:07.423 Discovery Log Change Notices: Not Supported 00:08:07.423 Controller Attributes 00:08:07.424 128-bit Host Identifier: Not Supported 00:08:07.424 Non-Operational Permissive Mode: Not Supported 00:08:07.424 NVM Sets: Not Supported 00:08:07.424 Read Recovery Levels: Not Supported 00:08:07.424 Endurance Groups: Not Supported 00:08:07.424 Predictable Latency Mode: Not Supported 00:08:07.424 Traffic Based Keep ALive: Not Supported 00:08:07.424 Namespace Granularity: Not Supported 00:08:07.424 SQ Associations: Not Supported 00:08:07.424 UUID List: Not Supported 00:08:07.424 Multi-Domain Subsystem: Not Supported 00:08:07.424 Fixed Capacity Management: Not Supported 00:08:07.424 Variable Capacity Management: Not Supported 00:08:07.424 Delete Endurance Group: Not Supported 00:08:07.424 Delete NVM Set: Not Supported [2024-11-20 23:56:57.684929] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 74682 terminated unexpected 00:08:07.424 [2024-11-20 23:56:57.686507] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 74682 terminated unexpected 00:08:07.424 [2024-11-20 23:56:57.687790] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 74682 terminated unexpected 00:08:07.424 Extended LBA Formats Supported: Supported 00:08:07.424 Flexible Data Placement Supported: Not Supported 00:08:07.424 00:08:07.424 Controller Memory Buffer Support 00:08:07.424 ================================ 00:08:07.424 Supported: No 00:08:07.424 00:08:07.424 Persistent Memory Region Support 00:08:07.424 ================================ 00:08:07.424 Supported: No 00:08:07.424 00:08:07.424 Admin Command Set Attributes 00:08:07.424 ============================ 00:08:07.424 Security Send/Receive: Not Supported 00:08:07.424 Format NVM: Supported 00:08:07.424 Firmware Activate/Download: Not Supported 00:08:07.424 Namespace Management: Supported 00:08:07.424 Device Self-Test: Not Supported 00:08:07.424 Directives: Supported 00:08:07.424 NVMe-MI: Not Supported 00:08:07.424 Virtualization Management: Not Supported 00:08:07.424 Doorbell Buffer Config: Supported 00:08:07.424 Get LBA Status Capability: Not Supported 00:08:07.424 Command & Feature Lockdown Capability: Not Supported 00:08:07.424 Abort Command Limit: 4 00:08:07.424 Async Event Request Limit: 4 00:08:07.424 Number of Firmware Slots: N/A 00:08:07.424 Firmware Slot 1 Read-Only: N/A 00:08:07.424 Firmware Activation Without Reset: N/A 00:08:07.424 Multiple Update Detection Support: N/A 00:08:07.424 Firmware Update Granularity: No Information Provided 00:08:07.424 Per-Namespace SMART Log: Yes 00:08:07.424 Asymmetric Namespace Access Log Page: Not Supported 00:08:07.424 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:08:07.424 Command Effects Log Page: Supported 00:08:07.424 Get Log Page Extended Data: Supported 00:08:07.424 Telemetry Log Pages: Not Supported 00:08:07.424 Persistent Event Log Pages: Not Supported 00:08:07.424 Supported Log Pages Log Page: May Support 00:08:07.424 Commands Supported & Effects Log Page: Not Supported 00:08:07.424 Feature Identifiers & Effects Log Page:May Support 00:08:07.424 NVMe-MI Commands & Effects Log Page: May Support 00:08:07.424 Data Area 4 for Telemetry Log: Not Supported 00:08:07.424 Error Log Page Entries Supported: 1 00:08:07.424 Keep Alive: Not Supported 00:08:07.424 00:08:07.424 NVM Command Set Attributes 00:08:07.424 ========================== 00:08:07.424 Submission Queue Entry Size 00:08:07.424 Max: 64 00:08:07.424 Min: 64 00:08:07.424 Completion Queue Entry Size 00:08:07.424 Max: 16 00:08:07.424 Min: 16 00:08:07.424 Number of Namespaces: 256 00:08:07.424 Compare Command: Supported 00:08:07.424 Write Uncorrectable Command: Not Supported 00:08:07.424 Dataset Management Command: Supported 00:08:07.424 Write Zeroes Command: Supported 00:08:07.424 Set Features Save Field: Supported 00:08:07.424 Reservations: Not Supported 00:08:07.424 Timestamp: Supported 00:08:07.424 Copy: Supported 00:08:07.424 Volatile Write Cache: Present 00:08:07.424 Atomic Write Unit (Normal): 1 00:08:07.424 Atomic Write Unit (PFail): 1 00:08:07.424 Atomic Compare & Write Unit: 1 00:08:07.424 Fused Compare & Write: Not Supported 00:08:07.424 Scatter-Gather List 00:08:07.424 SGL Command Set: Supported 00:08:07.424 SGL Keyed: Not Supported 00:08:07.424 SGL Bit Bucket Descriptor: Not Supported 00:08:07.424 SGL Metadata Pointer: Not Supported 00:08:07.424 Oversized SGL: Not Supported 00:08:07.424 SGL Metadata Address: Not Supported 00:08:07.424 SGL Offset: Not Supported 00:08:07.424 Transport SGL Data Block: Not Supported 00:08:07.424 Replay Protected Memory Block: Not Supported 00:08:07.424 00:08:07.424 Firmware Slot Information 00:08:07.424 ========================= 00:08:07.424 Active slot: 1 00:08:07.424 Slot 1 Firmware Revision: 1.0 00:08:07.424 00:08:07.424 00:08:07.424 Commands Supported and Effects 00:08:07.424 ============================== 00:08:07.424 Admin Commands 00:08:07.424 -------------- 00:08:07.424 Delete I/O Submission Queue (00h): Supported 00:08:07.424 Create I/O Submission Queue (01h): Supported 00:08:07.424 Get Log Page (02h): Supported 00:08:07.424 Delete I/O Completion Queue (04h): Supported 00:08:07.424 Create I/O Completion Queue (05h): Supported 00:08:07.424 Identify (06h): Supported 00:08:07.424 Abort (08h): Supported 00:08:07.424 Set Features (09h): Supported 00:08:07.424 Get Features (0Ah): Supported 00:08:07.424 Asynchronous Event Request (0Ch): Supported 00:08:07.424 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:07.424 Directive Send (19h): Supported 00:08:07.424 Directive Receive (1Ah): Supported 00:08:07.424 Virtualization Management (1Ch): Supported 00:08:07.424 Doorbell Buffer Config (7Ch): Supported 00:08:07.424 Format NVM (80h): Supported LBA-Change 00:08:07.424 I/O Commands 00:08:07.424 ------------ 00:08:07.424 Flush (00h): Supported LBA-Change 00:08:07.424 Write (01h): Supported LBA-Change 00:08:07.424 Read (02h): Supported 00:08:07.424 Compare (05h): Supported 00:08:07.424 Write Zeroes (08h): Supported LBA-Change 00:08:07.424 Dataset Management (09h): Supported LBA-Change 00:08:07.424 Unknown (0Ch): Supported 00:08:07.424 Unknown (12h): Supported 00:08:07.424 Copy (19h): Supported LBA-Change 
00:08:07.424 Unknown (1Dh): Supported LBA-Change 00:08:07.424 00:08:07.424 Error Log 00:08:07.424 ========= 00:08:07.424 00:08:07.424 Arbitration 00:08:07.424 =========== 00:08:07.424 Arbitration Burst: no limit 00:08:07.424 00:08:07.424 Power Management 00:08:07.424 ================ 00:08:07.424 Number of Power States: 1 00:08:07.424 Current Power State: Power State #0 00:08:07.424 Power State #0: 00:08:07.424 Max Power: 25.00 W 00:08:07.424 Non-Operational State: Operational 00:08:07.424 Entry Latency: 16 microseconds 00:08:07.424 Exit Latency: 4 microseconds 00:08:07.424 Relative Read Throughput: 0 00:08:07.424 Relative Read Latency: 0 00:08:07.424 Relative Write Throughput: 0 00:08:07.425 Relative Write Latency: 0 00:08:07.425 Idle Power: Not Reported 00:08:07.425 Active Power: Not Reported 00:08:07.425 Non-Operational Permissive Mode: Not Supported 00:08:07.425 00:08:07.425 Health Information 00:08:07.425 ================== 00:08:07.425 Critical Warnings: 00:08:07.425 Available Spare Space: OK 00:08:07.425 Temperature: OK 00:08:07.425 Device Reliability: OK 00:08:07.425 Read Only: No 00:08:07.425 Volatile Memory Backup: OK 00:08:07.425 Current Temperature: 323 Kelvin (50 Celsius) 00:08:07.425 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:07.425 Available Spare: 0% 00:08:07.425 Available Spare Threshold: 0% 00:08:07.425 Life Percentage Used: 0% 00:08:07.425 Data Units Read: 2242 00:08:07.425 Data Units Written: 2029 00:08:07.425 Host Read Commands: 112727 00:08:07.425 Host Write Commands: 110996 00:08:07.425 Controller Busy Time: 0 minutes 00:08:07.425 Power Cycles: 0 00:08:07.425 Power On Hours: 0 hours 00:08:07.425 Unsafe Shutdowns: 0 00:08:07.425 Unrecoverable Media Errors: 0 00:08:07.425 Lifetime Error Log Entries: 0 00:08:07.425 Warning Temperature Time: 0 minutes 00:08:07.425 Critical Temperature Time: 0 minutes 00:08:07.425 00:08:07.425 Number of Queues 00:08:07.425 ================ 00:08:07.425 Number of I/O Submission Queues: 64 00:08:07.425 Number of I/O Completion Queues: 64 00:08:07.425 00:08:07.425 ZNS Specific Controller Data 00:08:07.425 ============================ 00:08:07.425 Zone Append Size Limit: 0 00:08:07.425 00:08:07.425 00:08:07.425 Active Namespaces 00:08:07.425 ================= 00:08:07.425 Namespace ID:1 00:08:07.425 Error Recovery Timeout: Unlimited 00:08:07.425 Command Set Identifier: NVM (00h) 00:08:07.425 Deallocate: Supported 00:08:07.425 Deallocated/Unwritten Error: Supported 00:08:07.425 Deallocated Read Value: All 0x00 00:08:07.425 Deallocate in Write Zeroes: Not Supported 00:08:07.425 Deallocated Guard Field: 0xFFFF 00:08:07.425 Flush: Supported 00:08:07.425 Reservation: Not Supported 00:08:07.425 Namespace Sharing Capabilities: Private 00:08:07.425 Size (in LBAs): 1048576 (4GiB) 00:08:07.425 Capacity (in LBAs): 1048576 (4GiB) 00:08:07.425 Utilization (in LBAs): 1048576 (4GiB) 00:08:07.425 Thin Provisioning: Not Supported 00:08:07.425 Per-NS Atomic Units: No 00:08:07.425 Maximum Single Source Range Length: 128 00:08:07.425 Maximum Copy Length: 128 00:08:07.425 Maximum Source Range Count: 128 00:08:07.425 NGUID/EUI64 Never Reused: No 00:08:07.425 Namespace Write Protected: No 00:08:07.425 Number of LBA Formats: 8 00:08:07.425 Current LBA Format: LBA Format #04 00:08:07.425 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:07.425 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:07.425 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:07.425 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:07.425 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:08:07.425 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:07.425 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:07.425 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:07.425 00:08:07.425 NVM Specific Namespace Data 00:08:07.425 =========================== 00:08:07.425 Logical Block Storage Tag Mask: 0 00:08:07.425 Protection Information Capabilities: 00:08:07.425 16b Guard Protection Information Storage Tag Support: No 00:08:07.425 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:07.425 Storage Tag Check Read Support: No 00:08:07.425 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Namespace ID:2 00:08:07.425 Error Recovery Timeout: Unlimited 00:08:07.425 Command Set Identifier: NVM (00h) 00:08:07.425 Deallocate: Supported 00:08:07.425 Deallocated/Unwritten Error: Supported 00:08:07.425 Deallocated Read Value: All 0x00 00:08:07.425 Deallocate in Write Zeroes: Not Supported 00:08:07.425 Deallocated Guard Field: 0xFFFF 00:08:07.425 Flush: Supported 00:08:07.425 Reservation: Not Supported 00:08:07.425 Namespace Sharing Capabilities: Private 00:08:07.425 Size (in LBAs): 1048576 (4GiB) 00:08:07.425 Capacity (in LBAs): 1048576 (4GiB) 00:08:07.425 Utilization (in LBAs): 1048576 (4GiB) 00:08:07.425 Thin Provisioning: Not Supported 00:08:07.425 Per-NS Atomic Units: No 00:08:07.425 Maximum Single Source Range Length: 128 00:08:07.425 Maximum Copy Length: 128 00:08:07.425 Maximum Source Range Count: 128 00:08:07.425 NGUID/EUI64 Never Reused: No 00:08:07.425 Namespace Write Protected: No 00:08:07.425 Number of LBA Formats: 8 00:08:07.425 Current LBA Format: LBA Format #04 00:08:07.425 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:07.425 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:07.425 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:07.425 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:07.425 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:07.425 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:07.425 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:07.425 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:07.425 00:08:07.425 NVM Specific Namespace Data 00:08:07.425 =========================== 00:08:07.425 Logical Block Storage Tag Mask: 0 00:08:07.425 Protection Information Capabilities: 00:08:07.425 16b Guard Protection Information Storage Tag Support: No 00:08:07.425 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:07.425 Storage Tag Check Read Support: No 00:08:07.425 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:08:07.425 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.425 Namespace ID:3 00:08:07.425 Error Recovery Timeout: Unlimited 00:08:07.425 Command Set Identifier: NVM (00h) 00:08:07.425 Deallocate: Supported 00:08:07.425 Deallocated/Unwritten Error: Supported 00:08:07.425 Deallocated Read Value: All 0x00 00:08:07.425 Deallocate in Write Zeroes: Not Supported 00:08:07.425 Deallocated Guard Field: 0xFFFF 00:08:07.425 Flush: Supported 00:08:07.425 Reservation: Not Supported 00:08:07.426 Namespace Sharing Capabilities: Private 00:08:07.426 Size (in LBAs): 1048576 (4GiB) 00:08:07.426 Capacity (in LBAs): 1048576 (4GiB) 00:08:07.426 Utilization (in LBAs): 1048576 (4GiB) 00:08:07.426 Thin Provisioning: Not Supported 00:08:07.426 Per-NS Atomic Units: No 00:08:07.426 Maximum Single Source Range Length: 128 00:08:07.426 Maximum Copy Length: 128 00:08:07.426 Maximum Source Range Count: 128 00:08:07.426 NGUID/EUI64 Never Reused: No 00:08:07.426 Namespace Write Protected: No 00:08:07.426 Number of LBA Formats: 8 00:08:07.426 Current LBA Format: LBA Format #04 00:08:07.426 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:07.426 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:07.426 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:07.426 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:07.426 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:07.426 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:07.426 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:07.426 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:07.426 00:08:07.426 NVM Specific Namespace Data 00:08:07.426 =========================== 00:08:07.426 Logical Block Storage Tag Mask: 0 00:08:07.426 Protection Information Capabilities: 00:08:07.426 16b Guard Protection Information Storage Tag Support: No 00:08:07.426 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:07.426 Storage Tag Check Read Support: No 00:08:07.426 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.426 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.426 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.426 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.426 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.426 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.426 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.426 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.426 23:56:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:07.426 23:56:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:07.685 ===================================================== 00:08:07.685 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:07.685 ===================================================== 00:08:07.685 Controller Capabilities/Features 00:08:07.685 ================================ 00:08:07.685 Vendor ID: 1b36 00:08:07.685 Subsystem Vendor ID: 1af4 00:08:07.685 Serial Number: 12340 00:08:07.685 Model Number: QEMU NVMe Ctrl 00:08:07.685 Firmware Version: 8.0.0 00:08:07.685 Recommended Arb Burst: 6 00:08:07.685 IEEE OUI Identifier: 00 54 52 00:08:07.685 Multi-path I/O 00:08:07.685 May have multiple subsystem ports: No 00:08:07.685 May have multiple controllers: No 00:08:07.685 Associated with SR-IOV VF: No 00:08:07.685 Max Data Transfer Size: 524288 00:08:07.685 Max Number of Namespaces: 256 00:08:07.685 Max Number of I/O Queues: 64 00:08:07.685 NVMe Specification Version (VS): 1.4 00:08:07.685 NVMe Specification Version (Identify): 1.4 00:08:07.685 Maximum Queue Entries: 2048 00:08:07.685 Contiguous Queues Required: Yes 00:08:07.685 Arbitration Mechanisms Supported 00:08:07.685 Weighted Round Robin: Not Supported 00:08:07.685 Vendor Specific: Not Supported 00:08:07.685 Reset Timeout: 7500 ms 00:08:07.685 Doorbell Stride: 4 bytes 00:08:07.685 NVM Subsystem Reset: Not Supported 00:08:07.685 Command Sets Supported 00:08:07.685 NVM Command Set: Supported 00:08:07.685 Boot Partition: Not Supported 00:08:07.685 Memory Page Size Minimum: 4096 bytes 00:08:07.685 Memory Page Size Maximum: 65536 bytes 00:08:07.685 Persistent Memory Region: Not Supported 00:08:07.685 Optional Asynchronous Events Supported 00:08:07.685 Namespace Attribute Notices: Supported 00:08:07.685 Firmware Activation Notices: Not Supported 00:08:07.685 ANA Change Notices: Not Supported 00:08:07.685 PLE Aggregate Log Change Notices: Not Supported 00:08:07.685 LBA Status Info Alert Notices: Not Supported 00:08:07.685 EGE Aggregate Log Change Notices: Not Supported 00:08:07.685 Normal NVM Subsystem Shutdown event: Not Supported 00:08:07.685 Zone Descriptor Change Notices: Not Supported 00:08:07.685 Discovery Log Change Notices: Not Supported 00:08:07.685 Controller Attributes 00:08:07.685 128-bit Host Identifier: Not Supported 00:08:07.685 Non-Operational Permissive Mode: Not Supported 00:08:07.685 NVM Sets: Not Supported 00:08:07.685 Read Recovery Levels: Not Supported 00:08:07.685 Endurance Groups: Not Supported 00:08:07.685 Predictable Latency Mode: Not Supported 00:08:07.685 Traffic Based Keep ALive: Not Supported 00:08:07.685 Namespace Granularity: Not Supported 00:08:07.685 SQ Associations: Not Supported 00:08:07.685 UUID List: Not Supported 00:08:07.685 Multi-Domain Subsystem: Not Supported 00:08:07.685 Fixed Capacity Management: Not Supported 00:08:07.685 Variable Capacity Management: Not Supported 00:08:07.685 Delete Endurance Group: Not Supported 00:08:07.685 Delete NVM Set: Not Supported 00:08:07.685 Extended LBA Formats Supported: Supported 00:08:07.685 Flexible Data Placement Supported: Not Supported 00:08:07.685 00:08:07.685 Controller Memory Buffer Support 00:08:07.685 ================================ 00:08:07.685 Supported: No 00:08:07.685 00:08:07.685 Persistent Memory Region Support 00:08:07.685 ================================ 00:08:07.685 Supported: No 00:08:07.685 00:08:07.685 Admin Command Set Attributes 00:08:07.685 ============================ 00:08:07.685 Security Send/Receive: Not Supported 00:08:07.685 
Format NVM: Supported 00:08:07.685 Firmware Activate/Download: Not Supported 00:08:07.685 Namespace Management: Supported 00:08:07.685 Device Self-Test: Not Supported 00:08:07.685 Directives: Supported 00:08:07.685 NVMe-MI: Not Supported 00:08:07.685 Virtualization Management: Not Supported 00:08:07.685 Doorbell Buffer Config: Supported 00:08:07.685 Get LBA Status Capability: Not Supported 00:08:07.685 Command & Feature Lockdown Capability: Not Supported 00:08:07.685 Abort Command Limit: 4 00:08:07.685 Async Event Request Limit: 4 00:08:07.685 Number of Firmware Slots: N/A 00:08:07.685 Firmware Slot 1 Read-Only: N/A 00:08:07.685 Firmware Activation Without Reset: N/A 00:08:07.685 Multiple Update Detection Support: N/A 00:08:07.685 Firmware Update Granularity: No Information Provided 00:08:07.685 Per-Namespace SMART Log: Yes 00:08:07.685 Asymmetric Namespace Access Log Page: Not Supported 00:08:07.685 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:07.685 Command Effects Log Page: Supported 00:08:07.685 Get Log Page Extended Data: Supported 00:08:07.685 Telemetry Log Pages: Not Supported 00:08:07.685 Persistent Event Log Pages: Not Supported 00:08:07.685 Supported Log Pages Log Page: May Support 00:08:07.685 Commands Supported & Effects Log Page: Not Supported 00:08:07.685 Feature Identifiers & Effects Log Page:May Support 00:08:07.686 NVMe-MI Commands & Effects Log Page: May Support 00:08:07.686 Data Area 4 for Telemetry Log: Not Supported 00:08:07.686 Error Log Page Entries Supported: 1 00:08:07.686 Keep Alive: Not Supported 00:08:07.686 00:08:07.686 NVM Command Set Attributes 00:08:07.686 ========================== 00:08:07.686 Submission Queue Entry Size 00:08:07.686 Max: 64 00:08:07.686 Min: 64 00:08:07.686 Completion Queue Entry Size 00:08:07.686 Max: 16 00:08:07.686 Min: 16 00:08:07.686 Number of Namespaces: 256 00:08:07.686 Compare Command: Supported 00:08:07.686 Write Uncorrectable Command: Not Supported 00:08:07.686 Dataset Management Command: Supported 00:08:07.686 Write Zeroes Command: Supported 00:08:07.686 Set Features Save Field: Supported 00:08:07.686 Reservations: Not Supported 00:08:07.686 Timestamp: Supported 00:08:07.686 Copy: Supported 00:08:07.686 Volatile Write Cache: Present 00:08:07.686 Atomic Write Unit (Normal): 1 00:08:07.686 Atomic Write Unit (PFail): 1 00:08:07.686 Atomic Compare & Write Unit: 1 00:08:07.686 Fused Compare & Write: Not Supported 00:08:07.686 Scatter-Gather List 00:08:07.686 SGL Command Set: Supported 00:08:07.686 SGL Keyed: Not Supported 00:08:07.686 SGL Bit Bucket Descriptor: Not Supported 00:08:07.686 SGL Metadata Pointer: Not Supported 00:08:07.686 Oversized SGL: Not Supported 00:08:07.686 SGL Metadata Address: Not Supported 00:08:07.686 SGL Offset: Not Supported 00:08:07.686 Transport SGL Data Block: Not Supported 00:08:07.686 Replay Protected Memory Block: Not Supported 00:08:07.686 00:08:07.686 Firmware Slot Information 00:08:07.686 ========================= 00:08:07.686 Active slot: 1 00:08:07.686 Slot 1 Firmware Revision: 1.0 00:08:07.686 00:08:07.686 00:08:07.686 Commands Supported and Effects 00:08:07.686 ============================== 00:08:07.686 Admin Commands 00:08:07.686 -------------- 00:08:07.686 Delete I/O Submission Queue (00h): Supported 00:08:07.686 Create I/O Submission Queue (01h): Supported 00:08:07.686 Get Log Page (02h): Supported 00:08:07.686 Delete I/O Completion Queue (04h): Supported 00:08:07.686 Create I/O Completion Queue (05h): Supported 00:08:07.686 Identify (06h): Supported 00:08:07.686 Abort (08h): Supported 
00:08:07.686 Set Features (09h): Supported 00:08:07.686 Get Features (0Ah): Supported 00:08:07.686 Asynchronous Event Request (0Ch): Supported 00:08:07.686 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:07.686 Directive Send (19h): Supported 00:08:07.686 Directive Receive (1Ah): Supported 00:08:07.686 Virtualization Management (1Ch): Supported 00:08:07.686 Doorbell Buffer Config (7Ch): Supported 00:08:07.686 Format NVM (80h): Supported LBA-Change 00:08:07.686 I/O Commands 00:08:07.686 ------------ 00:08:07.686 Flush (00h): Supported LBA-Change 00:08:07.686 Write (01h): Supported LBA-Change 00:08:07.686 Read (02h): Supported 00:08:07.686 Compare (05h): Supported 00:08:07.686 Write Zeroes (08h): Supported LBA-Change 00:08:07.686 Dataset Management (09h): Supported LBA-Change 00:08:07.686 Unknown (0Ch): Supported 00:08:07.686 Unknown (12h): Supported 00:08:07.686 Copy (19h): Supported LBA-Change 00:08:07.686 Unknown (1Dh): Supported LBA-Change 00:08:07.686 00:08:07.686 Error Log 00:08:07.686 ========= 00:08:07.686 00:08:07.686 Arbitration 00:08:07.686 =========== 00:08:07.686 Arbitration Burst: no limit 00:08:07.686 00:08:07.686 Power Management 00:08:07.686 ================ 00:08:07.686 Number of Power States: 1 00:08:07.686 Current Power State: Power State #0 00:08:07.686 Power State #0: 00:08:07.686 Max Power: 25.00 W 00:08:07.686 Non-Operational State: Operational 00:08:07.686 Entry Latency: 16 microseconds 00:08:07.686 Exit Latency: 4 microseconds 00:08:07.686 Relative Read Throughput: 0 00:08:07.686 Relative Read Latency: 0 00:08:07.686 Relative Write Throughput: 0 00:08:07.686 Relative Write Latency: 0 00:08:07.686 Idle Power: Not Reported 00:08:07.686 Active Power: Not Reported 00:08:07.686 Non-Operational Permissive Mode: Not Supported 00:08:07.686 00:08:07.686 Health Information 00:08:07.686 ================== 00:08:07.686 Critical Warnings: 00:08:07.686 Available Spare Space: OK 00:08:07.686 Temperature: OK 00:08:07.686 Device Reliability: OK 00:08:07.686 Read Only: No 00:08:07.686 Volatile Memory Backup: OK 00:08:07.686 Current Temperature: 323 Kelvin (50 Celsius) 00:08:07.686 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:07.686 Available Spare: 0% 00:08:07.686 Available Spare Threshold: 0% 00:08:07.686 Life Percentage Used: 0% 00:08:07.686 Data Units Read: 683 00:08:07.686 Data Units Written: 611 00:08:07.686 Host Read Commands: 36865 00:08:07.686 Host Write Commands: 36651 00:08:07.686 Controller Busy Time: 0 minutes 00:08:07.686 Power Cycles: 0 00:08:07.686 Power On Hours: 0 hours 00:08:07.686 Unsafe Shutdowns: 0 00:08:07.686 Unrecoverable Media Errors: 0 00:08:07.686 Lifetime Error Log Entries: 0 00:08:07.686 Warning Temperature Time: 0 minutes 00:08:07.686 Critical Temperature Time: 0 minutes 00:08:07.686 00:08:07.686 Number of Queues 00:08:07.686 ================ 00:08:07.686 Number of I/O Submission Queues: 64 00:08:07.686 Number of I/O Completion Queues: 64 00:08:07.686 00:08:07.686 ZNS Specific Controller Data 00:08:07.686 ============================ 00:08:07.686 Zone Append Size Limit: 0 00:08:07.686 00:08:07.686 00:08:07.686 Active Namespaces 00:08:07.686 ================= 00:08:07.686 Namespace ID:1 00:08:07.686 Error Recovery Timeout: Unlimited 00:08:07.686 Command Set Identifier: NVM (00h) 00:08:07.686 Deallocate: Supported 00:08:07.686 Deallocated/Unwritten Error: Supported 00:08:07.686 Deallocated Read Value: All 0x00 00:08:07.686 Deallocate in Write Zeroes: Not Supported 00:08:07.686 Deallocated Guard Field: 0xFFFF 00:08:07.686 Flush: 
Supported 00:08:07.686 Reservation: Not Supported 00:08:07.686 Metadata Transferred as: Separate Metadata Buffer 00:08:07.686 Namespace Sharing Capabilities: Private 00:08:07.686 Size (in LBAs): 1548666 (5GiB) 00:08:07.686 Capacity (in LBAs): 1548666 (5GiB) 00:08:07.686 Utilization (in LBAs): 1548666 (5GiB) 00:08:07.686 Thin Provisioning: Not Supported 00:08:07.686 Per-NS Atomic Units: No 00:08:07.686 Maximum Single Source Range Length: 128 00:08:07.686 Maximum Copy Length: 128 00:08:07.686 Maximum Source Range Count: 128 00:08:07.686 NGUID/EUI64 Never Reused: No 00:08:07.686 Namespace Write Protected: No 00:08:07.686 Number of LBA Formats: 8 00:08:07.686 Current LBA Format: LBA Format #07 00:08:07.686 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:07.686 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:07.686 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:07.686 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:07.686 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:07.686 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:07.686 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:07.686 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:07.686 00:08:07.686 NVM Specific Namespace Data 00:08:07.686 =========================== 00:08:07.686 Logical Block Storage Tag Mask: 0 00:08:07.686 Protection Information Capabilities: 00:08:07.686 16b Guard Protection Information Storage Tag Support: No 00:08:07.686 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:07.686 Storage Tag Check Read Support: No 00:08:07.686 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.686 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.686 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.686 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.686 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.686 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.686 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.686 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.686 23:56:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:07.686 23:56:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:07.686 ===================================================== 00:08:07.686 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:07.686 ===================================================== 00:08:07.686 Controller Capabilities/Features 00:08:07.686 ================================ 00:08:07.686 Vendor ID: 1b36 00:08:07.686 Subsystem Vendor ID: 1af4 00:08:07.686 Serial Number: 12341 00:08:07.686 Model Number: QEMU NVMe Ctrl 00:08:07.687 Firmware Version: 8.0.0 00:08:07.687 Recommended Arb Burst: 6 00:08:07.687 IEEE OUI Identifier: 00 54 52 00:08:07.687 Multi-path I/O 00:08:07.687 May have multiple subsystem ports: No 00:08:07.687 May have multiple controllers: No 00:08:07.687 Associated with SR-IOV VF: No 00:08:07.687 Max Data Transfer Size: 524288 00:08:07.687 Max Number of Namespaces: 256 00:08:07.687 Max Number of I/O Queues: 64 00:08:07.687 NVMe 
Specification Version (VS): 1.4 00:08:07.687 NVMe Specification Version (Identify): 1.4 00:08:07.687 Maximum Queue Entries: 2048 00:08:07.687 Contiguous Queues Required: Yes 00:08:07.687 Arbitration Mechanisms Supported 00:08:07.687 Weighted Round Robin: Not Supported 00:08:07.687 Vendor Specific: Not Supported 00:08:07.687 Reset Timeout: 7500 ms 00:08:07.687 Doorbell Stride: 4 bytes 00:08:07.687 NVM Subsystem Reset: Not Supported 00:08:07.687 Command Sets Supported 00:08:07.687 NVM Command Set: Supported 00:08:07.687 Boot Partition: Not Supported 00:08:07.687 Memory Page Size Minimum: 4096 bytes 00:08:07.687 Memory Page Size Maximum: 65536 bytes 00:08:07.687 Persistent Memory Region: Not Supported 00:08:07.687 Optional Asynchronous Events Supported 00:08:07.687 Namespace Attribute Notices: Supported 00:08:07.687 Firmware Activation Notices: Not Supported 00:08:07.687 ANA Change Notices: Not Supported 00:08:07.687 PLE Aggregate Log Change Notices: Not Supported 00:08:07.687 LBA Status Info Alert Notices: Not Supported 00:08:07.687 EGE Aggregate Log Change Notices: Not Supported 00:08:07.687 Normal NVM Subsystem Shutdown event: Not Supported 00:08:07.687 Zone Descriptor Change Notices: Not Supported 00:08:07.687 Discovery Log Change Notices: Not Supported 00:08:07.687 Controller Attributes 00:08:07.687 128-bit Host Identifier: Not Supported 00:08:07.687 Non-Operational Permissive Mode: Not Supported 00:08:07.687 NVM Sets: Not Supported 00:08:07.687 Read Recovery Levels: Not Supported 00:08:07.687 Endurance Groups: Not Supported 00:08:07.687 Predictable Latency Mode: Not Supported 00:08:07.687 Traffic Based Keep ALive: Not Supported 00:08:07.687 Namespace Granularity: Not Supported 00:08:07.687 SQ Associations: Not Supported 00:08:07.687 UUID List: Not Supported 00:08:07.687 Multi-Domain Subsystem: Not Supported 00:08:07.687 Fixed Capacity Management: Not Supported 00:08:07.687 Variable Capacity Management: Not Supported 00:08:07.687 Delete Endurance Group: Not Supported 00:08:07.687 Delete NVM Set: Not Supported 00:08:07.687 Extended LBA Formats Supported: Supported 00:08:07.687 Flexible Data Placement Supported: Not Supported 00:08:07.687 00:08:07.687 Controller Memory Buffer Support 00:08:07.687 ================================ 00:08:07.687 Supported: No 00:08:07.687 00:08:07.687 Persistent Memory Region Support 00:08:07.687 ================================ 00:08:07.687 Supported: No 00:08:07.687 00:08:07.687 Admin Command Set Attributes 00:08:07.687 ============================ 00:08:07.687 Security Send/Receive: Not Supported 00:08:07.687 Format NVM: Supported 00:08:07.687 Firmware Activate/Download: Not Supported 00:08:07.687 Namespace Management: Supported 00:08:07.687 Device Self-Test: Not Supported 00:08:07.687 Directives: Supported 00:08:07.687 NVMe-MI: Not Supported 00:08:07.687 Virtualization Management: Not Supported 00:08:07.687 Doorbell Buffer Config: Supported 00:08:07.687 Get LBA Status Capability: Not Supported 00:08:07.687 Command & Feature Lockdown Capability: Not Supported 00:08:07.687 Abort Command Limit: 4 00:08:07.687 Async Event Request Limit: 4 00:08:07.687 Number of Firmware Slots: N/A 00:08:07.687 Firmware Slot 1 Read-Only: N/A 00:08:07.687 Firmware Activation Without Reset: N/A 00:08:07.687 Multiple Update Detection Support: N/A 00:08:07.687 Firmware Update Granularity: No Information Provided 00:08:07.687 Per-Namespace SMART Log: Yes 00:08:07.687 Asymmetric Namespace Access Log Page: Not Supported 00:08:07.687 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:08:07.687 Command Effects Log Page: Supported 00:08:07.687 Get Log Page Extended Data: Supported 00:08:07.687 Telemetry Log Pages: Not Supported 00:08:07.687 Persistent Event Log Pages: Not Supported 00:08:07.687 Supported Log Pages Log Page: May Support 00:08:07.687 Commands Supported & Effects Log Page: Not Supported 00:08:07.687 Feature Identifiers & Effects Log Page:May Support 00:08:07.687 NVMe-MI Commands & Effects Log Page: May Support 00:08:07.687 Data Area 4 for Telemetry Log: Not Supported 00:08:07.687 Error Log Page Entries Supported: 1 00:08:07.687 Keep Alive: Not Supported 00:08:07.687 00:08:07.687 NVM Command Set Attributes 00:08:07.687 ========================== 00:08:07.687 Submission Queue Entry Size 00:08:07.687 Max: 64 00:08:07.687 Min: 64 00:08:07.687 Completion Queue Entry Size 00:08:07.687 Max: 16 00:08:07.687 Min: 16 00:08:07.687 Number of Namespaces: 256 00:08:07.687 Compare Command: Supported 00:08:07.687 Write Uncorrectable Command: Not Supported 00:08:07.687 Dataset Management Command: Supported 00:08:07.687 Write Zeroes Command: Supported 00:08:07.687 Set Features Save Field: Supported 00:08:07.687 Reservations: Not Supported 00:08:07.687 Timestamp: Supported 00:08:07.687 Copy: Supported 00:08:07.687 Volatile Write Cache: Present 00:08:07.687 Atomic Write Unit (Normal): 1 00:08:07.687 Atomic Write Unit (PFail): 1 00:08:07.687 Atomic Compare & Write Unit: 1 00:08:07.687 Fused Compare & Write: Not Supported 00:08:07.687 Scatter-Gather List 00:08:07.687 SGL Command Set: Supported 00:08:07.687 SGL Keyed: Not Supported 00:08:07.687 SGL Bit Bucket Descriptor: Not Supported 00:08:07.687 SGL Metadata Pointer: Not Supported 00:08:07.687 Oversized SGL: Not Supported 00:08:07.687 SGL Metadata Address: Not Supported 00:08:07.687 SGL Offset: Not Supported 00:08:07.687 Transport SGL Data Block: Not Supported 00:08:07.687 Replay Protected Memory Block: Not Supported 00:08:07.687 00:08:07.687 Firmware Slot Information 00:08:07.687 ========================= 00:08:07.687 Active slot: 1 00:08:07.687 Slot 1 Firmware Revision: 1.0 00:08:07.687 00:08:07.687 00:08:07.687 Commands Supported and Effects 00:08:07.687 ============================== 00:08:07.687 Admin Commands 00:08:07.687 -------------- 00:08:07.687 Delete I/O Submission Queue (00h): Supported 00:08:07.687 Create I/O Submission Queue (01h): Supported 00:08:07.687 Get Log Page (02h): Supported 00:08:07.687 Delete I/O Completion Queue (04h): Supported 00:08:07.687 Create I/O Completion Queue (05h): Supported 00:08:07.687 Identify (06h): Supported 00:08:07.687 Abort (08h): Supported 00:08:07.687 Set Features (09h): Supported 00:08:07.687 Get Features (0Ah): Supported 00:08:07.687 Asynchronous Event Request (0Ch): Supported 00:08:07.687 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:07.687 Directive Send (19h): Supported 00:08:07.687 Directive Receive (1Ah): Supported 00:08:07.687 Virtualization Management (1Ch): Supported 00:08:07.687 Doorbell Buffer Config (7Ch): Supported 00:08:07.687 Format NVM (80h): Supported LBA-Change 00:08:07.687 I/O Commands 00:08:07.687 ------------ 00:08:07.687 Flush (00h): Supported LBA-Change 00:08:07.687 Write (01h): Supported LBA-Change 00:08:07.687 Read (02h): Supported 00:08:07.687 Compare (05h): Supported 00:08:07.687 Write Zeroes (08h): Supported LBA-Change 00:08:07.687 Dataset Management (09h): Supported LBA-Change 00:08:07.687 Unknown (0Ch): Supported 00:08:07.687 Unknown (12h): Supported 00:08:07.687 Copy (19h): Supported LBA-Change 00:08:07.687 Unknown (1Dh): 
Supported LBA-Change 00:08:07.687 00:08:07.687 Error Log 00:08:07.687 ========= 00:08:07.687 00:08:07.687 Arbitration 00:08:07.687 =========== 00:08:07.687 Arbitration Burst: no limit 00:08:07.687 00:08:07.687 Power Management 00:08:07.687 ================ 00:08:07.687 Number of Power States: 1 00:08:07.687 Current Power State: Power State #0 00:08:07.687 Power State #0: 00:08:07.687 Max Power: 25.00 W 00:08:07.687 Non-Operational State: Operational 00:08:07.687 Entry Latency: 16 microseconds 00:08:07.687 Exit Latency: 4 microseconds 00:08:07.687 Relative Read Throughput: 0 00:08:07.687 Relative Read Latency: 0 00:08:07.687 Relative Write Throughput: 0 00:08:07.687 Relative Write Latency: 0 00:08:07.687 Idle Power: Not Reported 00:08:07.687 Active Power: Not Reported 00:08:07.687 Non-Operational Permissive Mode: Not Supported 00:08:07.687 00:08:07.687 Health Information 00:08:07.687 ================== 00:08:07.687 Critical Warnings: 00:08:07.687 Available Spare Space: OK 00:08:07.687 Temperature: OK 00:08:07.687 Device Reliability: OK 00:08:07.688 Read Only: No 00:08:07.688 Volatile Memory Backup: OK 00:08:07.688 Current Temperature: 323 Kelvin (50 Celsius) 00:08:07.688 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:07.688 Available Spare: 0% 00:08:07.688 Available Spare Threshold: 0% 00:08:07.688 Life Percentage Used: 0% 00:08:07.688 Data Units Read: 1053 00:08:07.688 Data Units Written: 926 00:08:07.688 Host Read Commands: 54913 00:08:07.688 Host Write Commands: 53802 00:08:07.688 Controller Busy Time: 0 minutes 00:08:07.688 Power Cycles: 0 00:08:07.688 Power On Hours: 0 hours 00:08:07.688 Unsafe Shutdowns: 0 00:08:07.688 Unrecoverable Media Errors: 0 00:08:07.688 Lifetime Error Log Entries: 0 00:08:07.688 Warning Temperature Time: 0 minutes 00:08:07.688 Critical Temperature Time: 0 minutes 00:08:07.688 00:08:07.688 Number of Queues 00:08:07.688 ================ 00:08:07.688 Number of I/O Submission Queues: 64 00:08:07.688 Number of I/O Completion Queues: 64 00:08:07.688 00:08:07.688 ZNS Specific Controller Data 00:08:07.688 ============================ 00:08:07.688 Zone Append Size Limit: 0 00:08:07.688 00:08:07.688 00:08:07.688 Active Namespaces 00:08:07.688 ================= 00:08:07.688 Namespace ID:1 00:08:07.688 Error Recovery Timeout: Unlimited 00:08:07.688 Command Set Identifier: NVM (00h) 00:08:07.688 Deallocate: Supported 00:08:07.688 Deallocated/Unwritten Error: Supported 00:08:07.688 Deallocated Read Value: All 0x00 00:08:07.688 Deallocate in Write Zeroes: Not Supported 00:08:07.688 Deallocated Guard Field: 0xFFFF 00:08:07.688 Flush: Supported 00:08:07.688 Reservation: Not Supported 00:08:07.688 Namespace Sharing Capabilities: Private 00:08:07.688 Size (in LBAs): 1310720 (5GiB) 00:08:07.688 Capacity (in LBAs): 1310720 (5GiB) 00:08:07.688 Utilization (in LBAs): 1310720 (5GiB) 00:08:07.688 Thin Provisioning: Not Supported 00:08:07.688 Per-NS Atomic Units: No 00:08:07.688 Maximum Single Source Range Length: 128 00:08:07.688 Maximum Copy Length: 128 00:08:07.688 Maximum Source Range Count: 128 00:08:07.688 NGUID/EUI64 Never Reused: No 00:08:07.688 Namespace Write Protected: No 00:08:07.688 Number of LBA Formats: 8 00:08:07.688 Current LBA Format: LBA Format #04 00:08:07.688 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:07.688 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:07.688 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:07.688 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:07.688 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:07.688 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:07.688 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:07.688 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:07.688 00:08:07.688 NVM Specific Namespace Data 00:08:07.688 =========================== 00:08:07.688 Logical Block Storage Tag Mask: 0 00:08:07.688 Protection Information Capabilities: 00:08:07.688 16b Guard Protection Information Storage Tag Support: No 00:08:07.688 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:07.688 Storage Tag Check Read Support: No 00:08:07.688 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.688 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.688 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.688 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.688 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.688 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.688 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.688 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.688 23:56:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:07.688 23:56:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:07.946 ===================================================== 00:08:07.946 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:07.946 ===================================================== 00:08:07.946 Controller Capabilities/Features 00:08:07.946 ================================ 00:08:07.946 Vendor ID: 1b36 00:08:07.946 Subsystem Vendor ID: 1af4 00:08:07.946 Serial Number: 12342 00:08:07.946 Model Number: QEMU NVMe Ctrl 00:08:07.946 Firmware Version: 8.0.0 00:08:07.947 Recommended Arb Burst: 6 00:08:07.947 IEEE OUI Identifier: 00 54 52 00:08:07.947 Multi-path I/O 00:08:07.947 May have multiple subsystem ports: No 00:08:07.947 May have multiple controllers: No 00:08:07.947 Associated with SR-IOV VF: No 00:08:07.947 Max Data Transfer Size: 524288 00:08:07.947 Max Number of Namespaces: 256 00:08:07.947 Max Number of I/O Queues: 64 00:08:07.947 NVMe Specification Version (VS): 1.4 00:08:07.947 NVMe Specification Version (Identify): 1.4 00:08:07.947 Maximum Queue Entries: 2048 00:08:07.947 Contiguous Queues Required: Yes 00:08:07.947 Arbitration Mechanisms Supported 00:08:07.947 Weighted Round Robin: Not Supported 00:08:07.947 Vendor Specific: Not Supported 00:08:07.947 Reset Timeout: 7500 ms 00:08:07.947 Doorbell Stride: 4 bytes 00:08:07.947 NVM Subsystem Reset: Not Supported 00:08:07.947 Command Sets Supported 00:08:07.947 NVM Command Set: Supported 00:08:07.947 Boot Partition: Not Supported 00:08:07.947 Memory Page Size Minimum: 4096 bytes 00:08:07.947 Memory Page Size Maximum: 65536 bytes 00:08:07.947 Persistent Memory Region: Not Supported 00:08:07.947 Optional Asynchronous Events Supported 00:08:07.947 Namespace Attribute Notices: Supported 00:08:07.947 Firmware Activation Notices: Not Supported 00:08:07.947 ANA Change Notices: Not Supported 00:08:07.947 PLE Aggregate Log Change Notices: Not Supported 00:08:07.947 LBA Status Info Alert Notices: 
Not Supported 00:08:07.947 EGE Aggregate Log Change Notices: Not Supported 00:08:07.947 Normal NVM Subsystem Shutdown event: Not Supported 00:08:07.947 Zone Descriptor Change Notices: Not Supported 00:08:07.947 Discovery Log Change Notices: Not Supported 00:08:07.947 Controller Attributes 00:08:07.947 128-bit Host Identifier: Not Supported 00:08:07.947 Non-Operational Permissive Mode: Not Supported 00:08:07.947 NVM Sets: Not Supported 00:08:07.947 Read Recovery Levels: Not Supported 00:08:07.947 Endurance Groups: Not Supported 00:08:07.947 Predictable Latency Mode: Not Supported 00:08:07.947 Traffic Based Keep Alive: Not Supported 00:08:07.947 Namespace Granularity: Not Supported 00:08:07.947 SQ Associations: Not Supported 00:08:07.947 UUID List: Not Supported 00:08:07.947 Multi-Domain Subsystem: Not Supported 00:08:07.947 Fixed Capacity Management: Not Supported 00:08:07.947 Variable Capacity Management: Not Supported 00:08:07.947 Delete Endurance Group: Not Supported 00:08:07.947 Delete NVM Set: Not Supported 00:08:07.947 Extended LBA Formats Supported: Supported 00:08:07.947 Flexible Data Placement Supported: Not Supported 00:08:07.947 00:08:07.947 Controller Memory Buffer Support 00:08:07.947 ================================ 00:08:07.947 Supported: No 00:08:07.947 00:08:07.947 Persistent Memory Region Support 00:08:07.947 ================================ 00:08:07.947 Supported: No 00:08:07.947 00:08:07.947 Admin Command Set Attributes 00:08:07.947 ============================ 00:08:07.947 Security Send/Receive: Not Supported 00:08:07.947 Format NVM: Supported 00:08:07.947 Firmware Activate/Download: Not Supported 00:08:07.947 Namespace Management: Supported 00:08:07.947 Device Self-Test: Not Supported 00:08:07.947 Directives: Supported 00:08:07.947 NVMe-MI: Not Supported 00:08:07.947 Virtualization Management: Not Supported 00:08:07.947 Doorbell Buffer Config: Supported 00:08:07.947 Get LBA Status Capability: Not Supported 00:08:07.947 Command & Feature Lockdown Capability: Not Supported 00:08:07.947 Abort Command Limit: 4 00:08:07.947 Async Event Request Limit: 4 00:08:07.947 Number of Firmware Slots: N/A 00:08:07.947 Firmware Slot 1 Read-Only: N/A 00:08:07.947 Firmware Activation Without Reset: N/A 00:08:07.947 Multiple Update Detection Support: N/A 00:08:07.947 Firmware Update Granularity: No Information Provided 00:08:07.947 Per-Namespace SMART Log: Yes 00:08:07.947 Asymmetric Namespace Access Log Page: Not Supported 00:08:07.947 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:07.947 Command Effects Log Page: Supported 00:08:07.947 Get Log Page Extended Data: Supported 00:08:07.947 Telemetry Log Pages: Not Supported 00:08:07.947 Persistent Event Log Pages: Not Supported 00:08:07.947 Supported Log Pages Log Page: May Support 00:08:07.947 Commands Supported & Effects Log Page: Not Supported 00:08:07.947 Feature Identifiers & Effects Log Page: May Support 00:08:07.947 NVMe-MI Commands & Effects Log Page: May Support 00:08:07.947 Data Area 4 for Telemetry Log: Not Supported 00:08:07.947 Error Log Page Entries Supported: 1 00:08:07.947 Keep Alive: Not Supported 00:08:07.947 00:08:07.947 NVM Command Set Attributes 00:08:07.947 ========================== 00:08:07.947 Submission Queue Entry Size 00:08:07.947 Max: 64 00:08:07.947 Min: 64 00:08:07.947 Completion Queue Entry Size 00:08:07.947 Max: 16 00:08:07.947 Min: 16 00:08:07.947 Number of Namespaces: 256 00:08:07.947 Compare Command: Supported 00:08:07.947 Write Uncorrectable Command: Not Supported 00:08:07.947 Dataset Management Command: 
Supported 00:08:07.947 Write Zeroes Command: Supported 00:08:07.947 Set Features Save Field: Supported 00:08:07.947 Reservations: Not Supported 00:08:07.947 Timestamp: Supported 00:08:07.947 Copy: Supported 00:08:07.947 Volatile Write Cache: Present 00:08:07.947 Atomic Write Unit (Normal): 1 00:08:07.947 Atomic Write Unit (PFail): 1 00:08:07.947 Atomic Compare & Write Unit: 1 00:08:07.947 Fused Compare & Write: Not Supported 00:08:07.947 Scatter-Gather List 00:08:07.947 SGL Command Set: Supported 00:08:07.947 SGL Keyed: Not Supported 00:08:07.947 SGL Bit Bucket Descriptor: Not Supported 00:08:07.947 SGL Metadata Pointer: Not Supported 00:08:07.947 Oversized SGL: Not Supported 00:08:07.947 SGL Metadata Address: Not Supported 00:08:07.947 SGL Offset: Not Supported 00:08:07.947 Transport SGL Data Block: Not Supported 00:08:07.947 Replay Protected Memory Block: Not Supported 00:08:07.947 00:08:07.947 Firmware Slot Information 00:08:07.947 ========================= 00:08:07.947 Active slot: 1 00:08:07.947 Slot 1 Firmware Revision: 1.0 00:08:07.947 00:08:07.947 00:08:07.947 Commands Supported and Effects 00:08:07.947 ============================== 00:08:07.947 Admin Commands 00:08:07.947 -------------- 00:08:07.947 Delete I/O Submission Queue (00h): Supported 00:08:07.947 Create I/O Submission Queue (01h): Supported 00:08:07.947 Get Log Page (02h): Supported 00:08:07.947 Delete I/O Completion Queue (04h): Supported 00:08:07.947 Create I/O Completion Queue (05h): Supported 00:08:07.947 Identify (06h): Supported 00:08:07.947 Abort (08h): Supported 00:08:07.947 Set Features (09h): Supported 00:08:07.947 Get Features (0Ah): Supported 00:08:07.947 Asynchronous Event Request (0Ch): Supported 00:08:07.947 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:07.947 Directive Send (19h): Supported 00:08:07.947 Directive Receive (1Ah): Supported 00:08:07.947 Virtualization Management (1Ch): Supported 00:08:07.947 Doorbell Buffer Config (7Ch): Supported 00:08:07.947 Format NVM (80h): Supported LBA-Change 00:08:07.947 I/O Commands 00:08:07.947 ------------ 00:08:07.947 Flush (00h): Supported LBA-Change 00:08:07.947 Write (01h): Supported LBA-Change 00:08:07.947 Read (02h): Supported 00:08:07.947 Compare (05h): Supported 00:08:07.947 Write Zeroes (08h): Supported LBA-Change 00:08:07.947 Dataset Management (09h): Supported LBA-Change 00:08:07.947 Unknown (0Ch): Supported 00:08:07.947 Unknown (12h): Supported 00:08:07.947 Copy (19h): Supported LBA-Change 00:08:07.947 Unknown (1Dh): Supported LBA-Change 00:08:07.947 00:08:07.947 Error Log 00:08:07.947 ========= 00:08:07.947 00:08:07.947 Arbitration 00:08:07.947 =========== 00:08:07.947 Arbitration Burst: no limit 00:08:07.947 00:08:07.947 Power Management 00:08:07.947 ================ 00:08:07.947 Number of Power States: 1 00:08:07.947 Current Power State: Power State #0 00:08:07.947 Power State #0: 00:08:07.947 Max Power: 25.00 W 00:08:07.947 Non-Operational State: Operational 00:08:07.947 Entry Latency: 16 microseconds 00:08:07.947 Exit Latency: 4 microseconds 00:08:07.947 Relative Read Throughput: 0 00:08:07.947 Relative Read Latency: 0 00:08:07.947 Relative Write Throughput: 0 00:08:07.947 Relative Write Latency: 0 00:08:07.947 Idle Power: Not Reported 00:08:07.947 Active Power: Not Reported 00:08:07.947 Non-Operational Permissive Mode: Not Supported 00:08:07.947 00:08:07.947 Health Information 00:08:07.947 ================== 00:08:07.947 Critical Warnings: 00:08:07.947 Available Spare Space: OK 00:08:07.947 Temperature: OK 00:08:07.947 Device 
Reliability: OK 00:08:07.947 Read Only: No 00:08:07.947 Volatile Memory Backup: OK 00:08:07.947 Current Temperature: 323 Kelvin (50 Celsius) 00:08:07.947 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:07.947 Available Spare: 0% 00:08:07.948 Available Spare Threshold: 0% 00:08:07.948 Life Percentage Used: 0% 00:08:07.948 Data Units Read: 2242 00:08:07.948 Data Units Written: 2029 00:08:07.948 Host Read Commands: 112727 00:08:07.948 Host Write Commands: 110996 00:08:07.948 Controller Busy Time: 0 minutes 00:08:07.948 Power Cycles: 0 00:08:07.948 Power On Hours: 0 hours 00:08:07.948 Unsafe Shutdowns: 0 00:08:07.948 Unrecoverable Media Errors: 0 00:08:07.948 Lifetime Error Log Entries: 0 00:08:07.948 Warning Temperature Time: 0 minutes 00:08:07.948 Critical Temperature Time: 0 minutes 00:08:07.948 00:08:07.948 Number of Queues 00:08:07.948 ================ 00:08:07.948 Number of I/O Submission Queues: 64 00:08:07.948 Number of I/O Completion Queues: 64 00:08:07.948 00:08:07.948 ZNS Specific Controller Data 00:08:07.948 ============================ 00:08:07.948 Zone Append Size Limit: 0 00:08:07.948 00:08:07.948 00:08:07.948 Active Namespaces 00:08:07.948 ================= 00:08:07.948 Namespace ID:1 00:08:07.948 Error Recovery Timeout: Unlimited 00:08:07.948 Command Set Identifier: NVM (00h) 00:08:07.948 Deallocate: Supported 00:08:07.948 Deallocated/Unwritten Error: Supported 00:08:07.948 Deallocated Read Value: All 0x00 00:08:07.948 Deallocate in Write Zeroes: Not Supported 00:08:07.948 Deallocated Guard Field: 0xFFFF 00:08:07.948 Flush: Supported 00:08:07.948 Reservation: Not Supported 00:08:07.948 Namespace Sharing Capabilities: Private 00:08:07.948 Size (in LBAs): 1048576 (4GiB) 00:08:07.948 Capacity (in LBAs): 1048576 (4GiB) 00:08:07.948 Utilization (in LBAs): 1048576 (4GiB) 00:08:07.948 Thin Provisioning: Not Supported 00:08:07.948 Per-NS Atomic Units: No 00:08:07.948 Maximum Single Source Range Length: 128 00:08:07.948 Maximum Copy Length: 128 00:08:07.948 Maximum Source Range Count: 128 00:08:07.948 NGUID/EUI64 Never Reused: No 00:08:07.948 Namespace Write Protected: No 00:08:07.948 Number of LBA Formats: 8 00:08:07.948 Current LBA Format: LBA Format #04 00:08:07.948 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:07.948 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:07.948 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:07.948 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:07.948 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:07.948 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:07.948 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:07.948 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:07.948 00:08:07.948 NVM Specific Namespace Data 00:08:07.948 =========================== 00:08:07.948 Logical Block Storage Tag Mask: 0 00:08:07.948 Protection Information Capabilities: 00:08:07.948 16b Guard Protection Information Storage Tag Support: No 00:08:07.948 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:07.948 Storage Tag Check Read Support: No 00:08:07.948 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Namespace ID:2 00:08:07.948 Error Recovery Timeout: Unlimited 00:08:07.948 Command Set Identifier: NVM (00h) 00:08:07.948 Deallocate: Supported 00:08:07.948 Deallocated/Unwritten Error: Supported 00:08:07.948 Deallocated Read Value: All 0x00 00:08:07.948 Deallocate in Write Zeroes: Not Supported 00:08:07.948 Deallocated Guard Field: 0xFFFF 00:08:07.948 Flush: Supported 00:08:07.948 Reservation: Not Supported 00:08:07.948 Namespace Sharing Capabilities: Private 00:08:07.948 Size (in LBAs): 1048576 (4GiB) 00:08:07.948 Capacity (in LBAs): 1048576 (4GiB) 00:08:07.948 Utilization (in LBAs): 1048576 (4GiB) 00:08:07.948 Thin Provisioning: Not Supported 00:08:07.948 Per-NS Atomic Units: No 00:08:07.948 Maximum Single Source Range Length: 128 00:08:07.948 Maximum Copy Length: 128 00:08:07.948 Maximum Source Range Count: 128 00:08:07.948 NGUID/EUI64 Never Reused: No 00:08:07.948 Namespace Write Protected: No 00:08:07.948 Number of LBA Formats: 8 00:08:07.948 Current LBA Format: LBA Format #04 00:08:07.948 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:07.948 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:07.948 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:07.948 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:07.948 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:07.948 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:07.948 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:07.948 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:07.948 00:08:07.948 NVM Specific Namespace Data 00:08:07.948 =========================== 00:08:07.948 Logical Block Storage Tag Mask: 0 00:08:07.948 Protection Information Capabilities: 00:08:07.948 16b Guard Protection Information Storage Tag Support: No 00:08:07.948 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:07.948 Storage Tag Check Read Support: No 00:08:07.948 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Namespace ID:3 00:08:07.948 Error Recovery Timeout: Unlimited 00:08:07.948 Command Set Identifier: NVM (00h) 00:08:07.948 Deallocate: Supported 00:08:07.948 Deallocated/Unwritten Error: Supported 00:08:07.948 Deallocated Read Value: All 0x00 00:08:07.948 Deallocate in Write Zeroes: Not Supported 00:08:07.948 Deallocated Guard Field: 0xFFFF 00:08:07.948 Flush: Supported 00:08:07.948 Reservation: Not Supported 00:08:07.948 
Namespace Sharing Capabilities: Private 00:08:07.948 Size (in LBAs): 1048576 (4GiB) 00:08:07.948 Capacity (in LBAs): 1048576 (4GiB) 00:08:07.948 Utilization (in LBAs): 1048576 (4GiB) 00:08:07.948 Thin Provisioning: Not Supported 00:08:07.948 Per-NS Atomic Units: No 00:08:07.948 Maximum Single Source Range Length: 128 00:08:07.948 Maximum Copy Length: 128 00:08:07.948 Maximum Source Range Count: 128 00:08:07.948 NGUID/EUI64 Never Reused: No 00:08:07.948 Namespace Write Protected: No 00:08:07.948 Number of LBA Formats: 8 00:08:07.948 Current LBA Format: LBA Format #04 00:08:07.948 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:07.948 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:07.948 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:07.948 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:07.948 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:07.948 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:07.948 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:07.948 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:07.948 00:08:07.948 NVM Specific Namespace Data 00:08:07.948 =========================== 00:08:07.948 Logical Block Storage Tag Mask: 0 00:08:07.948 Protection Information Capabilities: 00:08:07.948 16b Guard Protection Information Storage Tag Support: No 00:08:07.948 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:07.948 Storage Tag Check Read Support: No 00:08:07.948 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:07.948 23:56:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:07.948 23:56:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:08.208 ===================================================== 00:08:08.208 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:08.208 ===================================================== 00:08:08.208 Controller Capabilities/Features 00:08:08.208 ================================ 00:08:08.208 Vendor ID: 1b36 00:08:08.208 Subsystem Vendor ID: 1af4 00:08:08.208 Serial Number: 12343 00:08:08.208 Model Number: QEMU NVMe Ctrl 00:08:08.208 Firmware Version: 8.0.0 00:08:08.208 Recommended Arb Burst: 6 00:08:08.208 IEEE OUI Identifier: 00 54 52 00:08:08.208 Multi-path I/O 00:08:08.208 May have multiple subsystem ports: No 00:08:08.208 May have multiple controllers: Yes 00:08:08.208 Associated with SR-IOV VF: No 00:08:08.208 Max Data Transfer Size: 524288 00:08:08.208 Max Number of Namespaces: 256 00:08:08.208 Max Number of I/O Queues: 64 00:08:08.208 NVMe Specification Version (VS): 1.4 00:08:08.208 NVMe Specification Version (Identify): 1.4 00:08:08.208 Maximum Queue Entries: 2048 
00:08:08.208 Contiguous Queues Required: Yes 00:08:08.208 Arbitration Mechanisms Supported 00:08:08.208 Weighted Round Robin: Not Supported 00:08:08.208 Vendor Specific: Not Supported 00:08:08.208 Reset Timeout: 7500 ms 00:08:08.208 Doorbell Stride: 4 bytes 00:08:08.208 NVM Subsystem Reset: Not Supported 00:08:08.208 Command Sets Supported 00:08:08.208 NVM Command Set: Supported 00:08:08.208 Boot Partition: Not Supported 00:08:08.208 Memory Page Size Minimum: 4096 bytes 00:08:08.208 Memory Page Size Maximum: 65536 bytes 00:08:08.208 Persistent Memory Region: Not Supported 00:08:08.208 Optional Asynchronous Events Supported 00:08:08.208 Namespace Attribute Notices: Supported 00:08:08.208 Firmware Activation Notices: Not Supported 00:08:08.208 ANA Change Notices: Not Supported 00:08:08.208 PLE Aggregate Log Change Notices: Not Supported 00:08:08.208 LBA Status Info Alert Notices: Not Supported 00:08:08.208 EGE Aggregate Log Change Notices: Not Supported 00:08:08.208 Normal NVM Subsystem Shutdown event: Not Supported 00:08:08.208 Zone Descriptor Change Notices: Not Supported 00:08:08.208 Discovery Log Change Notices: Not Supported 00:08:08.208 Controller Attributes 00:08:08.208 128-bit Host Identifier: Not Supported 00:08:08.208 Non-Operational Permissive Mode: Not Supported 00:08:08.208 NVM Sets: Not Supported 00:08:08.208 Read Recovery Levels: Not Supported 00:08:08.208 Endurance Groups: Supported 00:08:08.208 Predictable Latency Mode: Not Supported 00:08:08.208 Traffic Based Keep Alive: Not Supported 00:08:08.208 Namespace Granularity: Not Supported 00:08:08.208 SQ Associations: Not Supported 00:08:08.208 UUID List: Not Supported 00:08:08.208 Multi-Domain Subsystem: Not Supported 00:08:08.208 Fixed Capacity Management: Not Supported 00:08:08.208 Variable Capacity Management: Not Supported 00:08:08.208 Delete Endurance Group: Not Supported 00:08:08.208 Delete NVM Set: Not Supported 00:08:08.208 Extended LBA Formats Supported: Supported 00:08:08.208 Flexible Data Placement Supported: Supported 00:08:08.208 00:08:08.209 Controller Memory Buffer Support 00:08:08.209 ================================ 00:08:08.209 Supported: No 00:08:08.209 00:08:08.209 Persistent Memory Region Support 00:08:08.209 ================================ 00:08:08.209 Supported: No 00:08:08.209 00:08:08.209 Admin Command Set Attributes 00:08:08.209 ============================ 00:08:08.209 Security Send/Receive: Not Supported 00:08:08.209 Format NVM: Supported 00:08:08.209 Firmware Activate/Download: Not Supported 00:08:08.209 Namespace Management: Supported 00:08:08.209 Device Self-Test: Not Supported 00:08:08.209 Directives: Supported 00:08:08.209 NVMe-MI: Not Supported 00:08:08.209 Virtualization Management: Not Supported 00:08:08.209 Doorbell Buffer Config: Supported 00:08:08.209 Get LBA Status Capability: Not Supported 00:08:08.209 Command & Feature Lockdown Capability: Not Supported 00:08:08.209 Abort Command Limit: 4 00:08:08.209 Async Event Request Limit: 4 00:08:08.209 Number of Firmware Slots: N/A 00:08:08.209 Firmware Slot 1 Read-Only: N/A 00:08:08.209 Firmware Activation Without Reset: N/A 00:08:08.209 Multiple Update Detection Support: N/A 00:08:08.209 Firmware Update Granularity: No Information Provided 00:08:08.209 Per-Namespace SMART Log: Yes 00:08:08.209 Asymmetric Namespace Access Log Page: Not Supported 00:08:08.209 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:08.209 Command Effects Log Page: Supported 00:08:08.209 Get Log Page Extended Data: Supported 00:08:08.209 Telemetry Log Pages: Not 
Supported 00:08:08.209 Persistent Event Log Pages: Not Supported 00:08:08.209 Supported Log Pages Log Page: May Support 00:08:08.209 Commands Supported & Effects Log Page: Not Supported 00:08:08.209 Feature Identifiers & Effects Log Page: May Support 00:08:08.209 NVMe-MI Commands & Effects Log Page: May Support 00:08:08.209 Data Area 4 for Telemetry Log: Not Supported 00:08:08.209 Error Log Page Entries Supported: 1 00:08:08.209 Keep Alive: Not Supported 00:08:08.209 00:08:08.209 NVM Command Set Attributes 00:08:08.209 ========================== 00:08:08.209 Submission Queue Entry Size 00:08:08.209 Max: 64 00:08:08.209 Min: 64 00:08:08.209 Completion Queue Entry Size 00:08:08.209 Max: 16 00:08:08.209 Min: 16 00:08:08.209 Number of Namespaces: 256 00:08:08.209 Compare Command: Supported 00:08:08.209 Write Uncorrectable Command: Not Supported 00:08:08.209 Dataset Management Command: Supported 00:08:08.209 Write Zeroes Command: Supported 00:08:08.209 Set Features Save Field: Supported 00:08:08.209 Reservations: Not Supported 00:08:08.209 Timestamp: Supported 00:08:08.209 Copy: Supported 00:08:08.209 Volatile Write Cache: Present 00:08:08.209 Atomic Write Unit (Normal): 1 00:08:08.209 Atomic Write Unit (PFail): 1 00:08:08.209 Atomic Compare & Write Unit: 1 00:08:08.209 Fused Compare & Write: Not Supported 00:08:08.209 Scatter-Gather List 00:08:08.209 SGL Command Set: Supported 00:08:08.209 SGL Keyed: Not Supported 00:08:08.209 SGL Bit Bucket Descriptor: Not Supported 00:08:08.209 SGL Metadata Pointer: Not Supported 00:08:08.209 Oversized SGL: Not Supported 00:08:08.209 SGL Metadata Address: Not Supported 00:08:08.209 SGL Offset: Not Supported 00:08:08.209 Transport SGL Data Block: Not Supported 00:08:08.209 Replay Protected Memory Block: Not Supported 00:08:08.209 00:08:08.209 Firmware Slot Information 00:08:08.209 ========================= 00:08:08.209 Active slot: 1 00:08:08.209 Slot 1 Firmware Revision: 1.0 00:08:08.209 00:08:08.209 00:08:08.209 Commands Supported and Effects 00:08:08.209 ============================== 00:08:08.209 Admin Commands 00:08:08.209 -------------- 00:08:08.209 Delete I/O Submission Queue (00h): Supported 00:08:08.209 Create I/O Submission Queue (01h): Supported 00:08:08.209 Get Log Page (02h): Supported 00:08:08.209 Delete I/O Completion Queue (04h): Supported 00:08:08.209 Create I/O Completion Queue (05h): Supported 00:08:08.209 Identify (06h): Supported 00:08:08.209 Abort (08h): Supported 00:08:08.209 Set Features (09h): Supported 00:08:08.209 Get Features (0Ah): Supported 00:08:08.209 Asynchronous Event Request (0Ch): Supported 00:08:08.209 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:08.209 Directive Send (19h): Supported 00:08:08.209 Directive Receive (1Ah): Supported 00:08:08.209 Virtualization Management (1Ch): Supported 00:08:08.209 Doorbell Buffer Config (7Ch): Supported 00:08:08.209 Format NVM (80h): Supported LBA-Change 00:08:08.209 I/O Commands 00:08:08.209 ------------ 00:08:08.209 Flush (00h): Supported LBA-Change 00:08:08.209 Write (01h): Supported LBA-Change 00:08:08.209 Read (02h): Supported 00:08:08.209 Compare (05h): Supported 00:08:08.209 Write Zeroes (08h): Supported LBA-Change 00:08:08.209 Dataset Management (09h): Supported LBA-Change 00:08:08.209 Unknown (0Ch): Supported 00:08:08.209 Unknown (12h): Supported 00:08:08.209 Copy (19h): Supported LBA-Change 00:08:08.209 Unknown (1Dh): Supported LBA-Change 00:08:08.209 00:08:08.209 Error Log 00:08:08.209 ========= 00:08:08.209 00:08:08.209 Arbitration 00:08:08.209 =========== 
00:08:08.209 Arbitration Burst: no limit 00:08:08.209 00:08:08.209 Power Management 00:08:08.209 ================ 00:08:08.209 Number of Power States: 1 00:08:08.209 Current Power State: Power State #0 00:08:08.209 Power State #0: 00:08:08.209 Max Power: 25.00 W 00:08:08.209 Non-Operational State: Operational 00:08:08.209 Entry Latency: 16 microseconds 00:08:08.209 Exit Latency: 4 microseconds 00:08:08.209 Relative Read Throughput: 0 00:08:08.209 Relative Read Latency: 0 00:08:08.209 Relative Write Throughput: 0 00:08:08.209 Relative Write Latency: 0 00:08:08.209 Idle Power: Not Reported 00:08:08.209 Active Power: Not Reported 00:08:08.209 Non-Operational Permissive Mode: Not Supported 00:08:08.209 00:08:08.209 Health Information 00:08:08.209 ================== 00:08:08.209 Critical Warnings: 00:08:08.209 Available Spare Space: OK 00:08:08.209 Temperature: OK 00:08:08.209 Device Reliability: OK 00:08:08.209 Read Only: No 00:08:08.209 Volatile Memory Backup: OK 00:08:08.209 Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.209 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:08.209 Available Spare: 0% 00:08:08.209 Available Spare Threshold: 0% 00:08:08.209 Life Percentage Used: 0% 00:08:08.209 Data Units Read: 879 00:08:08.209 Data Units Written: 808 00:08:08.209 Host Read Commands: 38681 00:08:08.209 Host Write Commands: 38104 00:08:08.209 Controller Busy Time: 0 minutes 00:08:08.209 Power Cycles: 0 00:08:08.209 Power On Hours: 0 hours 00:08:08.209 Unsafe Shutdowns: 0 00:08:08.209 Unrecoverable Media Errors: 0 00:08:08.209 Lifetime Error Log Entries: 0 00:08:08.209 Warning Temperature Time: 0 minutes 00:08:08.209 Critical Temperature Time: 0 minutes 00:08:08.209 00:08:08.209 Number of Queues 00:08:08.209 ================ 00:08:08.209 Number of I/O Submission Queues: 64 00:08:08.209 Number of I/O Completion Queues: 64 00:08:08.209 00:08:08.209 ZNS Specific Controller Data 00:08:08.209 ============================ 00:08:08.209 Zone Append Size Limit: 0 00:08:08.209 00:08:08.209 00:08:08.209 Active Namespaces 00:08:08.209 ================= 00:08:08.209 Namespace ID:1 00:08:08.209 Error Recovery Timeout: Unlimited 00:08:08.209 Command Set Identifier: NVM (00h) 00:08:08.209 Deallocate: Supported 00:08:08.209 Deallocated/Unwritten Error: Supported 00:08:08.209 Deallocated Read Value: All 0x00 00:08:08.209 Deallocate in Write Zeroes: Not Supported 00:08:08.209 Deallocated Guard Field: 0xFFFF 00:08:08.209 Flush: Supported 00:08:08.209 Reservation: Not Supported 00:08:08.209 Namespace Sharing Capabilities: Multiple Controllers 00:08:08.209 Size (in LBAs): 262144 (1GiB) 00:08:08.209 Capacity (in LBAs): 262144 (1GiB) 00:08:08.209 Utilization (in LBAs): 262144 (1GiB) 00:08:08.209 Thin Provisioning: Not Supported 00:08:08.209 Per-NS Atomic Units: No 00:08:08.209 Maximum Single Source Range Length: 128 00:08:08.209 Maximum Copy Length: 128 00:08:08.209 Maximum Source Range Count: 128 00:08:08.209 NGUID/EUI64 Never Reused: No 00:08:08.209 Namespace Write Protected: No 00:08:08.209 Endurance group ID: 1 00:08:08.209 Number of LBA Formats: 8 00:08:08.209 Current LBA Format: LBA Format #04 00:08:08.209 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:08.209 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.209 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:08.209 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.209 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.209 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.209 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:08.209 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.209 00:08:08.209 Get Feature FDP: 00:08:08.209 ================ 00:08:08.209 Enabled: Yes 00:08:08.209 FDP configuration index: 0 00:08:08.209 00:08:08.210 FDP configurations log page 00:08:08.210 =========================== 00:08:08.210 Number of FDP configurations: 1 00:08:08.210 Version: 0 00:08:08.210 Size: 112 00:08:08.210 FDP Configuration Descriptor: 0 00:08:08.210 Descriptor Size: 96 00:08:08.210 Reclaim Group Identifier format: 2 00:08:08.210 FDP Volatile Write Cache: Not Present 00:08:08.210 FDP Configuration: Valid 00:08:08.210 Vendor Specific Size: 0 00:08:08.210 Number of Reclaim Groups: 2 00:08:08.210 Number of Reclaim Unit Handles: 8 00:08:08.210 Max Placement Identifiers: 128 00:08:08.210 Number of Namespaces Supported: 256 00:08:08.210 Reclaim Unit Nominal Size: 6000000 bytes 00:08:08.210 Estimated Reclaim Unit Time Limit: Not Reported 00:08:08.210 RUH Desc #000: RUH Type: Initially Isolated 00:08:08.210 RUH Desc #001: RUH Type: Initially Isolated 00:08:08.210 RUH Desc #002: RUH Type: Initially Isolated 00:08:08.210 RUH Desc #003: RUH Type: Initially Isolated 00:08:08.210 RUH Desc #004: RUH Type: Initially Isolated 00:08:08.210 RUH Desc #005: RUH Type: Initially Isolated 00:08:08.210 RUH Desc #006: RUH Type: Initially Isolated 00:08:08.210 RUH Desc #007: RUH Type: Initially Isolated 00:08:08.210 00:08:08.210 FDP reclaim unit handle usage log page 00:08:08.210 ====================================== 00:08:08.210 Number of Reclaim Unit Handles: 8 00:08:08.210 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:08.210 RUH Usage Desc #001: RUH Attributes: Unused 00:08:08.210 RUH Usage Desc #002: RUH Attributes: Unused 00:08:08.210 RUH Usage Desc #003: RUH Attributes: Unused 00:08:08.210 RUH Usage Desc #004: RUH Attributes: Unused 00:08:08.210 RUH Usage Desc #005: RUH Attributes: Unused 00:08:08.210 RUH Usage Desc #006: RUH Attributes: Unused 00:08:08.210 RUH Usage Desc #007: RUH Attributes: Unused 00:08:08.210 00:08:08.210 FDP statistics log page 00:08:08.210 ======================= 00:08:08.210 Host bytes with metadata written: 515743744 00:08:08.210 Media bytes with metadata written: 515801088 00:08:08.210 Media bytes erased: 0 00:08:08.210 00:08:08.210 FDP events log page 00:08:08.210 =================== 00:08:08.210 Number of FDP events: 0 00:08:08.210 00:08:08.210 NVM Specific Namespace Data 00:08:08.210 =========================== 00:08:08.210 Logical Block Storage Tag Mask: 0 00:08:08.210 Protection Information Capabilities: 00:08:08.210 16b Guard Protection Information Storage Tag Support: No 00:08:08.210 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.210 Storage Tag Check Read Support: No 00:08:08.210 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.210 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.210 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.210 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.210 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.210 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.210 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.210 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.210 00:08:08.210 real 0m0.957s 00:08:08.210 user 0m0.318s 00:08:08.210 sys 0m0.446s 00:08:08.210 23:56:58 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.210 ************************************ 00:08:08.210 END TEST nvme_identify 00:08:08.210 23:56:58 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:08.210 ************************************ 00:08:08.210 23:56:58 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:08.210 23:56:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.210 23:56:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.210 23:56:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.210 ************************************ 00:08:08.210 START TEST nvme_perf 00:08:08.210 ************************************ 00:08:08.210 23:56:58 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:08.210 23:56:58 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:09.584 Initializing NVMe Controllers 00:08:09.584 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:09.584 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:09.584 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:09.584 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:09.584 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:09.585 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:09.585 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:09.585 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:09.585 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:09.585 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:09.585 Initialization complete. Launching workers. 
00:08:09.585 ======================================================== 00:08:09.585 Latency(us) 00:08:09.585 Device Information : IOPS MiB/s Average min max 00:08:09.585 PCIE (0000:00:13.0) NSID 1 from core 0: 12337.17 144.58 10380.85 6579.75 31011.90 00:08:09.585 PCIE (0000:00:10.0) NSID 1 from core 0: 12337.17 144.58 10372.02 6201.68 30950.54 00:08:09.585 PCIE (0000:00:11.0) NSID 1 from core 0: 12337.17 144.58 10363.06 5821.41 30611.62 00:08:09.585 PCIE (0000:00:12.0) NSID 1 from core 0: 12337.17 144.58 10353.24 4888.57 30733.96 00:08:09.585 PCIE (0000:00:12.0) NSID 2 from core 0: 12337.17 144.58 10343.25 4515.75 30385.41 00:08:09.585 PCIE (0000:00:12.0) NSID 3 from core 0: 12401.09 145.33 10280.87 4083.74 23854.00 00:08:09.585 ======================================================== 00:08:09.585 Total : 74086.95 868.21 10348.82 4083.74 31011.90 00:08:09.585 00:08:09.585 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:09.585 ================================================================================= 00:08:09.585 1.00000% : 8318.031us 00:08:09.585 10.00000% : 9023.803us 00:08:09.585 25.00000% : 9427.102us 00:08:09.585 50.00000% : 10032.049us 00:08:09.585 75.00000% : 10838.646us 00:08:09.585 90.00000% : 11746.068us 00:08:09.585 95.00000% : 12754.314us 00:08:09.585 98.00000% : 14115.446us 00:08:09.585 99.00000% : 23088.837us 00:08:09.585 99.50000% : 29642.437us 00:08:09.585 99.90000% : 30852.332us 00:08:09.585 99.99000% : 31053.982us 00:08:09.585 99.99900% : 31053.982us 00:08:09.585 99.99990% : 31053.982us 00:08:09.585 99.99999% : 31053.982us 00:08:09.585 00:08:09.585 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:09.585 ================================================================================= 00:08:09.585 1.00000% : 8267.618us 00:08:09.585 10.00000% : 8973.391us 00:08:09.585 25.00000% : 9427.102us 00:08:09.585 50.00000% : 10082.462us 00:08:09.585 75.00000% : 10889.058us 00:08:09.585 90.00000% : 11746.068us 00:08:09.585 95.00000% : 12603.077us 00:08:09.585 98.00000% : 14115.446us 00:08:09.585 99.00000% : 22584.714us 00:08:09.585 99.50000% : 29440.788us 00:08:09.585 99.90000% : 30650.683us 00:08:09.585 99.99000% : 31053.982us 00:08:09.585 99.99900% : 31053.982us 00:08:09.585 99.99990% : 31053.982us 00:08:09.585 99.99999% : 31053.982us 00:08:09.585 00:08:09.585 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:09.585 ================================================================================= 00:08:09.585 1.00000% : 8217.206us 00:08:09.585 10.00000% : 8973.391us 00:08:09.585 25.00000% : 9427.102us 00:08:09.585 50.00000% : 10032.049us 00:08:09.585 75.00000% : 10889.058us 00:08:09.585 90.00000% : 11695.655us 00:08:09.585 95.00000% : 12603.077us 00:08:09.585 98.00000% : 14216.271us 00:08:09.585 99.00000% : 22080.591us 00:08:09.585 99.50000% : 29239.138us 00:08:09.585 99.90000% : 30449.034us 00:08:09.585 99.99000% : 30650.683us 00:08:09.585 99.99900% : 30650.683us 00:08:09.585 99.99990% : 30650.683us 00:08:09.585 99.99999% : 30650.683us 00:08:09.585 00:08:09.585 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:09.585 ================================================================================= 00:08:09.585 1.00000% : 8065.969us 00:08:09.585 10.00000% : 8973.391us 00:08:09.585 25.00000% : 9427.102us 00:08:09.585 50.00000% : 10082.462us 00:08:09.585 75.00000% : 10889.058us 00:08:09.585 90.00000% : 11746.068us 00:08:09.585 95.00000% : 12603.077us 00:08:09.585 98.00000% : 14014.622us 
00:08:09.585 99.00000% : 22080.591us 00:08:09.585 99.50000% : 29440.788us 00:08:09.585 99.90000% : 30449.034us 00:08:09.585 99.99000% : 30852.332us 00:08:09.585 99.99900% : 30852.332us 00:08:09.585 99.99990% : 30852.332us 00:08:09.585 99.99999% : 30852.332us 00:08:09.585 00:08:09.585 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:09.585 ================================================================================= 00:08:09.585 1.00000% : 7965.145us 00:08:09.585 10.00000% : 8973.391us 00:08:09.585 25.00000% : 9427.102us 00:08:09.585 50.00000% : 10082.462us 00:08:09.585 75.00000% : 10889.058us 00:08:09.585 90.00000% : 11746.068us 00:08:09.585 95.00000% : 12703.902us 00:08:09.585 98.00000% : 14115.446us 00:08:09.585 99.00000% : 21979.766us 00:08:09.585 99.50000% : 29037.489us 00:08:09.585 99.90000% : 30247.385us 00:08:09.585 99.99000% : 30449.034us 00:08:09.585 99.99900% : 30449.034us 00:08:09.585 99.99990% : 30449.034us 00:08:09.585 99.99999% : 30449.034us 00:08:09.585 00:08:09.585 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:09.585 ================================================================================= 00:08:09.585 1.00000% : 7662.671us 00:08:09.585 10.00000% : 8973.391us 00:08:09.585 25.00000% : 9427.102us 00:08:09.585 50.00000% : 10032.049us 00:08:09.585 75.00000% : 10838.646us 00:08:09.585 90.00000% : 11746.068us 00:08:09.585 95.00000% : 12855.138us 00:08:09.585 98.00000% : 13913.797us 00:08:09.585 99.00000% : 16333.588us 00:08:09.585 99.50000% : 22483.889us 00:08:09.585 99.90000% : 23592.960us 00:08:09.585 99.99000% : 23895.434us 00:08:09.585 99.99900% : 23895.434us 00:08:09.585 99.99990% : 23895.434us 00:08:09.585 99.99999% : 23895.434us 00:08:09.585 00:08:09.585 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:09.585 ============================================================================== 00:08:09.585 Range in us Cumulative IO count 00:08:09.585 6553.600 - 6604.012: 0.0648% ( 8) 00:08:09.585 6604.012 - 6654.425: 0.0891% ( 3) 00:08:09.585 6654.425 - 6704.837: 0.1133% ( 3) 00:08:09.585 6704.837 - 6755.249: 0.1538% ( 5) 00:08:09.585 6755.249 - 6805.662: 0.2024% ( 6) 00:08:09.585 6805.662 - 6856.074: 0.2348% ( 4) 00:08:09.585 6856.074 - 6906.486: 0.2753% ( 5) 00:08:09.585 6906.486 - 6956.898: 0.3076% ( 4) 00:08:09.585 6956.898 - 7007.311: 0.3481% ( 5) 00:08:09.585 7007.311 - 7057.723: 0.3805% ( 4) 00:08:09.585 7057.723 - 7108.135: 0.4129% ( 4) 00:08:09.585 7108.135 - 7158.548: 0.4534% ( 5) 00:08:09.585 7158.548 - 7208.960: 0.4858% ( 4) 00:08:09.585 7208.960 - 7259.372: 0.5181% ( 4) 00:08:09.585 8065.969 - 8116.382: 0.5262% ( 1) 00:08:09.585 8116.382 - 8166.794: 0.5748% ( 6) 00:08:09.585 8166.794 - 8217.206: 0.6962% ( 15) 00:08:09.585 8217.206 - 8267.618: 0.8258% ( 16) 00:08:09.585 8267.618 - 8318.031: 1.0201% ( 24) 00:08:09.585 8318.031 - 8368.443: 1.1901% ( 21) 00:08:09.585 8368.443 - 8418.855: 1.4168% ( 28) 00:08:09.585 8418.855 - 8469.268: 1.6597% ( 30) 00:08:09.585 8469.268 - 8519.680: 1.9187% ( 32) 00:08:09.585 8519.680 - 8570.092: 2.2911% ( 46) 00:08:09.585 8570.092 - 8620.505: 2.6959% ( 50) 00:08:09.585 8620.505 - 8670.917: 3.2950% ( 74) 00:08:09.585 8670.917 - 8721.329: 4.0479% ( 93) 00:08:09.585 8721.329 - 8771.742: 4.7766% ( 90) 00:08:09.585 8771.742 - 8822.154: 5.7157% ( 116) 00:08:09.585 8822.154 - 8872.566: 6.7843% ( 132) 00:08:09.585 8872.566 - 8922.978: 8.0230% ( 153) 00:08:09.585 8922.978 - 8973.391: 9.4236% ( 173) 00:08:09.585 8973.391 - 9023.803: 10.8646% ( 178) 00:08:09.585 9023.803 
- 9074.215: 12.3381% ( 182) 00:08:09.585 9074.215 - 9124.628: 14.0301% ( 209) 00:08:09.585 9124.628 - 9175.040: 15.8193% ( 221) 00:08:09.585 9175.040 - 9225.452: 17.7218% ( 235) 00:08:09.585 9225.452 - 9275.865: 19.8429% ( 262) 00:08:09.585 9275.865 - 9326.277: 21.8912% ( 253) 00:08:09.585 9326.277 - 9376.689: 24.1095% ( 274) 00:08:09.585 9376.689 - 9427.102: 26.2468% ( 264) 00:08:09.585 9427.102 - 9477.514: 28.2950% ( 253) 00:08:09.585 9477.514 - 9527.926: 30.4323% ( 264) 00:08:09.585 9527.926 - 9578.338: 32.5453% ( 261) 00:08:09.585 9578.338 - 9628.751: 34.5936% ( 253) 00:08:09.585 9628.751 - 9679.163: 36.6256% ( 251) 00:08:09.585 9679.163 - 9729.575: 38.6091% ( 245) 00:08:09.585 9729.575 - 9779.988: 40.6493% ( 252) 00:08:09.585 9779.988 - 9830.400: 42.6085% ( 242) 00:08:09.585 9830.400 - 9880.812: 44.4867% ( 232) 00:08:09.585 9880.812 - 9931.225: 46.4540% ( 243) 00:08:09.585 9931.225 - 9981.637: 48.4780% ( 250) 00:08:09.585 9981.637 - 10032.049: 50.2915% ( 224) 00:08:09.585 10032.049 - 10082.462: 52.1778% ( 233) 00:08:09.585 10082.462 - 10132.874: 54.0884% ( 236) 00:08:09.585 10132.874 - 10183.286: 55.8533% ( 218) 00:08:09.585 10183.286 - 10233.698: 57.4968% ( 203) 00:08:09.585 10233.698 - 10284.111: 59.1078% ( 199) 00:08:09.585 10284.111 - 10334.523: 60.6784% ( 194) 00:08:09.585 10334.523 - 10384.935: 62.2005% ( 188) 00:08:09.585 10384.935 - 10435.348: 63.7549% ( 192) 00:08:09.585 10435.348 - 10485.760: 65.3174% ( 193) 00:08:09.585 10485.760 - 10536.172: 66.7422% ( 176) 00:08:09.585 10536.172 - 10586.585: 68.2723% ( 189) 00:08:09.585 10586.585 - 10636.997: 69.7620% ( 184) 00:08:09.585 10636.997 - 10687.409: 71.2516% ( 184) 00:08:09.585 10687.409 - 10737.822: 72.6441% ( 172) 00:08:09.585 10737.822 - 10788.234: 73.9637% ( 163) 00:08:09.585 10788.234 - 10838.646: 75.2024% ( 153) 00:08:09.585 10838.646 - 10889.058: 76.4006% ( 148) 00:08:09.585 10889.058 - 10939.471: 77.5097% ( 137) 00:08:09.585 10939.471 - 10989.883: 78.5865% ( 133) 00:08:09.585 10989.883 - 11040.295: 79.6794% ( 135) 00:08:09.585 11040.295 - 11090.708: 80.6995% ( 126) 00:08:09.586 11090.708 - 11141.120: 81.6953% ( 123) 00:08:09.586 11141.120 - 11191.532: 82.5858% ( 110) 00:08:09.586 11191.532 - 11241.945: 83.5816% ( 123) 00:08:09.586 11241.945 - 11292.357: 84.4398% ( 106) 00:08:09.586 11292.357 - 11342.769: 85.1846% ( 92) 00:08:09.586 11342.769 - 11393.182: 85.9213% ( 91) 00:08:09.586 11393.182 - 11443.594: 86.6337% ( 88) 00:08:09.586 11443.594 - 11494.006: 87.3219% ( 85) 00:08:09.586 11494.006 - 11544.418: 87.9777% ( 81) 00:08:09.586 11544.418 - 11594.831: 88.5039% ( 65) 00:08:09.586 11594.831 - 11645.243: 89.0787% ( 71) 00:08:09.586 11645.243 - 11695.655: 89.6454% ( 70) 00:08:09.586 11695.655 - 11746.068: 90.1797% ( 66) 00:08:09.586 11746.068 - 11796.480: 90.6412% ( 57) 00:08:09.586 11796.480 - 11846.892: 91.0703% ( 53) 00:08:09.586 11846.892 - 11897.305: 91.4346% ( 45) 00:08:09.586 11897.305 - 11947.717: 91.7341% ( 37) 00:08:09.586 11947.717 - 11998.129: 92.0094% ( 34) 00:08:09.586 11998.129 - 12048.542: 92.2766% ( 33) 00:08:09.586 12048.542 - 12098.954: 92.4951% ( 27) 00:08:09.586 12098.954 - 12149.366: 92.6733% ( 22) 00:08:09.586 12149.366 - 12199.778: 92.8514% ( 22) 00:08:09.586 12199.778 - 12250.191: 93.0214% ( 21) 00:08:09.586 12250.191 - 12300.603: 93.2076% ( 23) 00:08:09.586 12300.603 - 12351.015: 93.4100% ( 25) 00:08:09.586 12351.015 - 12401.428: 93.6124% ( 25) 00:08:09.586 12401.428 - 12451.840: 93.8148% ( 25) 00:08:09.586 12451.840 - 12502.252: 94.0091% ( 24) 00:08:09.586 12502.252 - 12552.665: 94.1953% ( 23) 
00:08:09.586 12552.665 - 12603.077: 94.4543% ( 32) 00:08:09.586 12603.077 - 12653.489: 94.6648% ( 26) 00:08:09.586 12653.489 - 12703.902: 94.8834% ( 27) 00:08:09.586 12703.902 - 12754.314: 95.1263% ( 30) 00:08:09.586 12754.314 - 12804.726: 95.3611% ( 29) 00:08:09.586 12804.726 - 12855.138: 95.5716% ( 26) 00:08:09.586 12855.138 - 12905.551: 95.7740% ( 25) 00:08:09.586 12905.551 - 13006.375: 96.1302% ( 44) 00:08:09.586 13006.375 - 13107.200: 96.4459% ( 39) 00:08:09.586 13107.200 - 13208.025: 96.7050% ( 32) 00:08:09.586 13208.025 - 13308.849: 96.9074% ( 25) 00:08:09.586 13308.849 - 13409.674: 97.1665% ( 32) 00:08:09.586 13409.674 - 13510.498: 97.4174% ( 31) 00:08:09.586 13510.498 - 13611.323: 97.6036% ( 23) 00:08:09.586 13611.323 - 13712.148: 97.7332% ( 16) 00:08:09.586 13712.148 - 13812.972: 97.8222% ( 11) 00:08:09.586 13812.972 - 13913.797: 97.9032% ( 10) 00:08:09.586 13913.797 - 14014.622: 97.9841% ( 10) 00:08:09.586 14014.622 - 14115.446: 98.0813% ( 12) 00:08:09.586 14115.446 - 14216.271: 98.1541% ( 9) 00:08:09.586 14216.271 - 14317.095: 98.1946% ( 5) 00:08:09.586 14317.095 - 14417.920: 98.2189% ( 3) 00:08:09.586 14417.920 - 14518.745: 98.2432% ( 3) 00:08:09.586 14518.745 - 14619.569: 98.3080% ( 8) 00:08:09.586 14619.569 - 14720.394: 98.3727% ( 8) 00:08:09.586 14720.394 - 14821.218: 98.4456% ( 9) 00:08:09.586 14821.218 - 14922.043: 98.5266% ( 10) 00:08:09.586 14922.043 - 15022.868: 98.6075% ( 10) 00:08:09.586 15022.868 - 15123.692: 98.6885% ( 10) 00:08:09.586 15123.692 - 15224.517: 98.7613% ( 9) 00:08:09.586 15224.517 - 15325.342: 98.8261% ( 8) 00:08:09.586 15325.342 - 15426.166: 98.8666% ( 5) 00:08:09.586 15426.166 - 15526.991: 98.9233% ( 7) 00:08:09.586 15526.991 - 15627.815: 98.9637% ( 5) 00:08:09.586 22887.188 - 22988.012: 98.9718% ( 1) 00:08:09.586 22988.012 - 23088.837: 99.0123% ( 5) 00:08:09.586 23088.837 - 23189.662: 99.0528% ( 5) 00:08:09.586 23189.662 - 23290.486: 99.1014% ( 6) 00:08:09.586 23290.486 - 23391.311: 99.1580% ( 7) 00:08:09.586 23391.311 - 23492.135: 99.2066% ( 6) 00:08:09.586 23492.135 - 23592.960: 99.2471% ( 5) 00:08:09.586 23592.960 - 23693.785: 99.3038% ( 7) 00:08:09.586 23693.785 - 23794.609: 99.3604% ( 7) 00:08:09.586 23794.609 - 23895.434: 99.4171% ( 7) 00:08:09.586 23895.434 - 23996.258: 99.4576% ( 5) 00:08:09.586 23996.258 - 24097.083: 99.4819% ( 3) 00:08:09.586 29440.788 - 29642.437: 99.5385% ( 7) 00:08:09.586 29642.437 - 29844.086: 99.6033% ( 8) 00:08:09.586 29844.086 - 30045.735: 99.6681% ( 8) 00:08:09.586 30045.735 - 30247.385: 99.7328% ( 8) 00:08:09.586 30247.385 - 30449.034: 99.7895% ( 7) 00:08:09.586 30449.034 - 30650.683: 99.8624% ( 9) 00:08:09.586 30650.683 - 30852.332: 99.9433% ( 10) 00:08:09.586 30852.332 - 31053.982: 100.0000% ( 7) 00:08:09.586 00:08:09.586 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:09.586 ============================================================================== 00:08:09.586 Range in us Cumulative IO count 00:08:09.586 6200.714 - 6225.920: 0.0081% ( 1) 00:08:09.586 6225.920 - 6251.126: 0.0243% ( 2) 00:08:09.586 6251.126 - 6276.332: 0.0486% ( 3) 00:08:09.586 6276.332 - 6301.538: 0.0648% ( 2) 00:08:09.586 6301.538 - 6326.745: 0.0810% ( 2) 00:08:09.586 6326.745 - 6351.951: 0.0891% ( 1) 00:08:09.586 6351.951 - 6377.157: 0.1133% ( 3) 00:08:09.586 6377.157 - 6402.363: 0.1295% ( 2) 00:08:09.586 6402.363 - 6427.569: 0.1376% ( 1) 00:08:09.586 6427.569 - 6452.775: 0.1619% ( 3) 00:08:09.586 6452.775 - 6503.188: 0.1862% ( 3) 00:08:09.586 6503.188 - 6553.600: 0.2105% ( 3) 00:08:09.586 6553.600 - 6604.012: 0.2510% 
( 5) 00:08:09.586 6604.012 - 6654.425: 0.2834% ( 4) 00:08:09.586 6654.425 - 6704.837: 0.3076% ( 3) 00:08:09.586 6704.837 - 6755.249: 0.3400% ( 4) 00:08:09.586 6755.249 - 6805.662: 0.3724% ( 4) 00:08:09.586 6805.662 - 6856.074: 0.4048% ( 4) 00:08:09.586 6856.074 - 6906.486: 0.4291% ( 3) 00:08:09.586 6906.486 - 6956.898: 0.4615% ( 4) 00:08:09.586 6956.898 - 7007.311: 0.4938% ( 4) 00:08:09.586 7007.311 - 7057.723: 0.5181% ( 3) 00:08:09.586 7864.320 - 7914.732: 0.5424% ( 3) 00:08:09.586 7914.732 - 7965.145: 0.5748% ( 4) 00:08:09.586 7965.145 - 8015.557: 0.5991% ( 3) 00:08:09.586 8015.557 - 8065.969: 0.6477% ( 6) 00:08:09.586 8065.969 - 8116.382: 0.7448% ( 12) 00:08:09.586 8116.382 - 8166.794: 0.8420% ( 12) 00:08:09.586 8166.794 - 8217.206: 0.9553% ( 14) 00:08:09.586 8217.206 - 8267.618: 1.1658% ( 26) 00:08:09.586 8267.618 - 8318.031: 1.3763% ( 26) 00:08:09.586 8318.031 - 8368.443: 1.6597% ( 35) 00:08:09.586 8368.443 - 8418.855: 1.9268% ( 33) 00:08:09.586 8418.855 - 8469.268: 2.2264% ( 37) 00:08:09.586 8469.268 - 8519.680: 2.5421% ( 39) 00:08:09.586 8519.680 - 8570.092: 2.9307% ( 48) 00:08:09.586 8570.092 - 8620.505: 3.3598% ( 53) 00:08:09.586 8620.505 - 8670.917: 4.0479% ( 85) 00:08:09.586 8670.917 - 8721.329: 4.8008% ( 93) 00:08:09.586 8721.329 - 8771.742: 5.8371% ( 128) 00:08:09.586 8771.742 - 8822.154: 6.8734% ( 128) 00:08:09.586 8822.154 - 8872.566: 7.9339% ( 131) 00:08:09.586 8872.566 - 8922.978: 9.2374% ( 161) 00:08:09.586 8922.978 - 8973.391: 10.6703% ( 177) 00:08:09.586 8973.391 - 9023.803: 12.1600% ( 184) 00:08:09.586 9023.803 - 9074.215: 13.7710% ( 199) 00:08:09.586 9074.215 - 9124.628: 15.3902% ( 200) 00:08:09.586 9124.628 - 9175.040: 17.0661% ( 207) 00:08:09.586 9175.040 - 9225.452: 18.7824% ( 212) 00:08:09.586 9225.452 - 9275.865: 20.4339% ( 204) 00:08:09.586 9275.865 - 9326.277: 22.3527% ( 237) 00:08:09.586 9326.277 - 9376.689: 24.3119% ( 242) 00:08:09.586 9376.689 - 9427.102: 26.2306% ( 237) 00:08:09.586 9427.102 - 9477.514: 28.0926% ( 230) 00:08:09.586 9477.514 - 9527.926: 30.2947% ( 272) 00:08:09.586 9527.926 - 9578.338: 32.3591% ( 255) 00:08:09.586 9578.338 - 9628.751: 34.3831% ( 250) 00:08:09.586 9628.751 - 9679.163: 36.5042% ( 262) 00:08:09.586 9679.163 - 9729.575: 38.5444% ( 252) 00:08:09.586 9729.575 - 9779.988: 40.4712% ( 238) 00:08:09.587 9779.988 - 9830.400: 42.2118% ( 215) 00:08:09.587 9830.400 - 9880.812: 44.1872% ( 244) 00:08:09.587 9880.812 - 9931.225: 46.0573% ( 231) 00:08:09.587 9931.225 - 9981.637: 47.8951% ( 227) 00:08:09.587 9981.637 - 10032.049: 49.7166% ( 225) 00:08:09.587 10032.049 - 10082.462: 51.4653% ( 216) 00:08:09.587 10082.462 - 10132.874: 53.0602% ( 197) 00:08:09.587 10132.874 - 10183.286: 54.6227% ( 193) 00:08:09.587 10183.286 - 10233.698: 56.3552% ( 214) 00:08:09.587 10233.698 - 10284.111: 57.9258% ( 194) 00:08:09.587 10284.111 - 10334.523: 59.5855% ( 205) 00:08:09.587 10334.523 - 10384.935: 61.1156% ( 189) 00:08:09.587 10384.935 - 10435.348: 62.5972% ( 183) 00:08:09.587 10435.348 - 10485.760: 64.0220% ( 176) 00:08:09.587 10485.760 - 10536.172: 65.5521% ( 189) 00:08:09.587 10536.172 - 10586.585: 67.0742% ( 188) 00:08:09.587 10586.585 - 10636.997: 68.5476% ( 182) 00:08:09.587 10636.997 - 10687.409: 70.1263% ( 195) 00:08:09.587 10687.409 - 10737.822: 71.5835% ( 180) 00:08:09.587 10737.822 - 10788.234: 72.9356% ( 167) 00:08:09.587 10788.234 - 10838.646: 74.2471% ( 162) 00:08:09.587 10838.646 - 10889.058: 75.6801% ( 177) 00:08:09.587 10889.058 - 10939.471: 76.9673% ( 159) 00:08:09.587 10939.471 - 10989.883: 78.1655% ( 148) 00:08:09.587 10989.883 - 
00:08:09.587 [tail of the preceding latency histogram: per-bucket cumulative IO counts from ~11040.295us (79.2422% cumulative) through 30852.332 - 31053.982us (100.0000%)]
00:08:09.587
00:08:09.587 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:09.587 ==============================================================================
00:08:09.587        Range in us     Cumulative    IO count
00:08:09.588 [per-bucket latency table: 5797.415 - 5822.622us (0.0081% cumulative) through 30449.034 - 30650.683us (100.0000%)]
00:08:09.588
00:08:09.588 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:09.588 ==============================================================================
00:08:09.588        Range in us     Cumulative    IO count
00:08:09.589 [per-bucket latency table: 4864.788 - 4889.994us (0.0081% cumulative) through 30650.683 - 30852.332us (100.0000%)]
00:08:09.589
00:08:09.589 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:09.589 ==============================================================================
00:08:09.589        Range in us     Cumulative    IO count
00:08:09.590 [per-bucket latency table: 4511.902 - 4537.108us (0.0243% cumulative) through 30247.385 - 30449.034us (100.0000%)]
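The bucket tables above pair each latency range with a cumulative percentage, so any percentile can be read straight off the table: it is the upper edge of the first bucket whose cumulative share reaches the target. A minimal shell sketch, assuming the raw log's timestamped one-bucket-per-line form ("TIMESTAMP low - high: cum% ( count )"); the script name is hypothetical and not part of SPDK:

    #!/usr/bin/env bash
    # p_from_histogram.sh <histogram-file> [percentile]   (hypothetical helper)
    # Prints the upper bucket edge of the first bucket whose cumulative
    # percentage reaches the requested percentile (default: p99).
    target="${2:-99}"
    awk -v t="$target" '
        $3 == "-" && $5 + 0 >= t {   # $5 looks like "80.2704%"; +0 drops the %
            sub(/:$/, "", $4)        # $4 is the bucket upper edge, e.g. "11090.708:"
            printf "p%s <= %s us\n", t, $4
            exit
        }
    ' "$1"

Note this is a coarse bucket-edge read-off; the "Summary latency data" blocks the tool prints later interpolate and can report slightly different values.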
00:08:09.590 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:09.590 ==============================================================================
00:08:09.590        Range in us     Cumulative    IO count
00:08:09.591 [per-bucket latency table: 4083.397 - 4108.603us (0.0242% cumulative) through 23794.609 - 23895.434us (100.0000%)]
00:08:09.591
00:08:09.591 23:56:59 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
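That invocation launches the write pass whose results follow. Only the command line itself comes from the log; the flag glosses below are best-effort readings of spdk_nvme_perf's usage text, not something the log states:

    # Same invocation, commented (flag meanings hedged from the tool's help):
    #   -q 128     queue depth: 128 outstanding I/Os per namespace
    #   -w write   workload pattern: sequential writes
    #   -o 12288   I/O size in bytes (12 KiB)
    #   -t 1       run time in seconds
    #   -LL        latency tracking; the doubled L appears to be what produces
    #              the per-bucket histograms printed below
    #   -i 0       shared-memory ID for this SPDK application instance
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w write -o 12288 -t 1 -LL -i 0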
00:08:10.526 Initializing NVMe Controllers
00:08:10.526 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:10.527 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:10.527 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:10.527 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:10.527 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:10.527 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:10.527 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:10.527 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:10.527 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:10.527 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:10.527 Initialization complete. Launching workers.
00:08:10.527 ========================================================
00:08:10.527                                                                               Latency(us)
00:08:10.527 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:10.527 PCIE (0000:00:13.0) NSID 1 from core 0:   13190.29     154.57    9710.21    6629.37   26971.49
00:08:10.527 PCIE (0000:00:10.0) NSID 1 from core 0:   13190.29     154.57    9703.37    5809.05   26454.52
00:08:10.527 PCIE (0000:00:11.0) NSID 1 from core 0:   13190.29     154.57    9695.24    5707.92   25943.44
00:08:10.527 PCIE (0000:00:12.0) NSID 1 from core 0:   13190.29     154.57    9687.17    4537.63   26380.23
00:08:10.527 PCIE (0000:00:12.0) NSID 2 from core 0:   13190.29     154.57    9679.47    4359.45   26283.80
00:08:10.527 PCIE (0000:00:12.0) NSID 3 from core 0:   13254.01     155.32    9625.26    4022.27   20687.11
00:08:10.527 ========================================================
00:08:10.527 Total                                  :   79205.48     928.19    9683.41    4022.27   26971.49
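The MiB/s column follows directly from the IOPS column and the 12288-byte I/O size: MiB/s = IOPS x io_size_bytes / 2^20. A quick cross-check against the per-device and Total rows above:

    # Throughput cross-check for the table above:
    awk 'BEGIN {
        printf "per-device: %.2f MiB/s\n", 13190.29 * 12288 / 1048576   # -> 154.57
        printf "total:      %.2f MiB/s\n", 79205.48 * 12288 / 1048576   # -> 928.19
    }'

Both results match the printed columns, which confirms the table is reporting completed 12 KiB I/Os.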
00:08:10.527
00:08:10.527 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:10.527 =================================================================================
00:08:10.527   1.00000% :  7813.908us
00:08:10.527  10.00000% :  8469.268us
00:08:10.527  25.00000% :  8973.391us
00:08:10.527  50.00000% :  9376.689us
00:08:10.527  75.00000% :  9981.637us
00:08:10.527  90.00000% : 11292.357us
00:08:10.527  95.00000% : 12098.954us
00:08:10.527  98.00000% : 13712.148us
00:08:10.527  99.00000% : 15022.868us
00:08:10.527  99.50000% : 20870.695us
00:08:10.527  99.90000% : 26819.348us
00:08:10.527  99.99000% : 27020.997us
00:08:10.527  99.99900% : 27020.997us
00:08:10.527  99.99990% : 27020.997us
00:08:10.527  99.99999% : 27020.997us
00:08:10.527
00:08:10.527 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:10.527 =================================================================================
00:08:10.527   1.00000% :  7662.671us
00:08:10.527  10.00000% :  8469.268us
00:08:10.527  25.00000% :  8922.978us
00:08:10.527  50.00000% :  9376.689us
00:08:10.527  75.00000% : 10032.049us
00:08:10.527  90.00000% : 11241.945us
00:08:10.527  95.00000% : 12250.191us
00:08:10.527  98.00000% : 13611.323us
00:08:10.527  99.00000% : 15224.517us
00:08:10.527  99.50000% : 20971.520us
00:08:10.527  99.90000% : 26416.049us
00:08:10.527  99.99000% : 26617.698us
00:08:10.527  99.99900% : 26617.698us
00:08:10.527  99.99990% : 26617.698us
00:08:10.527  99.99999% : 26617.698us
00:08:10.527
00:08:10.527 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:10.527 =================================================================================
00:08:10.527   1.00000% :  7813.908us
00:08:10.527  10.00000% :  8519.680us
00:08:10.527  25.00000% :  8922.978us
00:08:10.527  50.00000% :  9376.689us
00:08:10.527  75.00000% :  9981.637us
00:08:10.527  90.00000% : 11191.532us
00:08:10.527  95.00000% : 12098.954us
00:08:10.527  98.00000% : 13409.674us
00:08:10.527  99.00000% : 15325.342us
00:08:10.527  99.50000% : 20669.046us
00:08:10.527  99.90000% : 25811.102us
00:08:10.527  99.99000% : 26012.751us
00:08:10.527  99.99900% : 26012.751us
00:08:10.527  99.99990% : 26012.751us
00:08:10.527  99.99999% : 26012.751us
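Every namespace gets a "Summary latency data" block like the ones above, one percentile per line. A small sketch that pulls a single percentile out of every block in the log; it assumes the timestamped "P% : VALUEus" layout shown here, and the script name is hypothetical:

    #!/usr/bin/env bash
    # summary_p99.sh <log-file>   (hypothetical helper, not part of SPDK)
    # Prints the 99th-percentile latency for every device in the log.
    awk '
        /Summary latency data for/ {
            dev = $0
            sub(/.*data for /, "", dev)     # keep "PCIE (....) NSID n from core 0:"
            sub(/ from core.*/, "", dev)    # trim to "PCIE (....) NSID n"
        }
        $2 == "99.00000%" { printf "%-30s p99 = %s\n", dev, $4 }
    ' "$1"

Run against this section it would report, for example, "PCIE (0000:00:13.0) NSID 1  p99 = 15022.868us".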
00:08:10.527
00:08:10.527 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:10.527 =================================================================================
00:08:10.527   1.00000% :  7713.083us
00:08:10.527  10.00000% :  8418.855us
00:08:10.527  25.00000% :  8922.978us
00:08:10.527  50.00000% :  9376.689us
00:08:10.527  75.00000% : 10032.049us
00:08:10.527  90.00000% : 11090.708us
00:08:10.527  95.00000% : 12250.191us
00:08:10.527  98.00000% : 13611.323us
00:08:10.527  99.00000% : 14619.569us
00:08:10.527  99.50000% : 20870.695us
00:08:10.527  99.90000% : 26214.400us
00:08:10.527  99.99000% : 26416.049us
00:08:10.527  99.99900% : 26416.049us
00:08:10.527  99.99990% : 26416.049us
00:08:10.527  99.99999% : 26416.049us
00:08:10.527
00:08:10.527 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:10.527 =================================================================================
00:08:10.527   1.00000% :  7612.258us
00:08:10.527  10.00000% :  8469.268us
00:08:10.527  25.00000% :  8922.978us
00:08:10.527  50.00000% :  9376.689us
00:08:10.527  75.00000% :  9981.637us
00:08:10.527  90.00000% : 11090.708us
00:08:10.527  95.00000% : 12199.778us
00:08:10.527  98.00000% : 13812.972us
00:08:10.527  99.00000% : 14619.569us
00:08:10.527  99.50000% : 20467.397us
00:08:10.527  99.90000% : 26214.400us
00:08:10.527  99.99000% : 26416.049us
00:08:10.527  99.99900% : 26416.049us
00:08:10.527  99.99990% : 26416.049us
00:08:10.527  99.99999% : 26416.049us
00:08:10.527
00:08:10.527 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:10.527 =================================================================================
00:08:10.527   1.00000% :  7662.671us
00:08:10.527  10.00000% :  8469.268us
00:08:10.527  25.00000% :  8973.391us
00:08:10.527  50.00000% :  9376.689us
00:08:10.527  75.00000% :  9931.225us
00:08:10.527  90.00000% : 11241.945us
00:08:10.527  95.00000% : 12098.954us
00:08:10.527  98.00000% : 13611.323us
00:08:10.527  99.00000% : 14317.095us
00:08:10.527  99.50000% : 15123.692us
00:08:10.527  99.90000% : 20568.222us
00:08:10.527  99.99000% : 20769.871us
00:08:10.527  99.99900% : 20769.871us
00:08:10.527  99.99990% : 20769.871us
00:08:10.527  99.99999% : 20769.871us
00:08:10.527
00:08:10.527 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:10.527 ==============================================================================
00:08:10.527        Range in us     Cumulative    IO count
00:08:10.528 [per-bucket latency table: 6604.012 - 6654.425us (0.0226% cumulative) through 26819.348 - 27020.997us (100.0000%)]
00:08:10.528
00:08:10.528 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:10.528 ==============================================================================
00:08:10.528        Range in us     Cumulative    IO count
00:08:10.529 [per-bucket latency table: 5797.415 - 5822.622us (0.0151% cumulative) through 26416.049 - 26617.698us (100.0000%)]
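In these histograms the percentage column is cumulative, while the parenthesized number reads as a per-bucket IO count (an inference from the data, not something the tool states: the final buckets add single-digit counts as the percentage closes on 100%). A sketch that totals the counts and echoes the last cumulative value as a consistency check, under the same timestamped one-bucket-per-line assumption:

    # Totals the per-bucket counts from a raw histogram dump; the final
    # cumulative percentage should be 100.0000 and the count total should
    # match the number of I/Os the run completed on that namespace.
    awk '
        $3 == "-" { sum += $NF + 0; cum = $5 + 0 }   # $NF is "127)", $5 is "80.2704%"
        END { printf "IOs counted: %d (last cumulative: %.4f%%)\n", sum, cum }
    ' "$1"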
7.6011% ( 105) 00:08:10.529 8368.443 - 8418.855: 8.3031% ( 93) 00:08:10.529 8418.855 - 8469.268: 9.3373% ( 137) 00:08:10.529 8469.268 - 8519.680: 10.4921% ( 153) 00:08:10.529 8519.680 - 8570.092: 11.9641% ( 195) 00:08:10.529 8570.092 - 8620.505: 13.5568% ( 211) 00:08:10.529 8620.505 - 8670.917: 15.2325% ( 222) 00:08:10.529 8670.917 - 8721.329: 17.1649% ( 256) 00:08:10.529 8721.329 - 8771.742: 19.3992% ( 296) 00:08:10.529 8771.742 - 8822.154: 21.8750% ( 328) 00:08:10.529 8822.154 - 8872.566: 24.1319% ( 299) 00:08:10.529 8872.566 - 8922.978: 26.3285% ( 291) 00:08:10.529 8922.978 - 8973.391: 28.8270% ( 331) 00:08:10.529 8973.391 - 9023.803: 31.3859% ( 339) 00:08:10.529 9023.803 - 9074.215: 33.9070% ( 334) 00:08:10.529 9074.215 - 9124.628: 36.6395% ( 362) 00:08:10.529 9124.628 - 9175.040: 39.0474% ( 319) 00:08:10.529 9175.040 - 9225.452: 41.8252% ( 368) 00:08:10.529 9225.452 - 9275.865: 45.1162% ( 436) 00:08:10.529 9275.865 - 9326.277: 48.6639% ( 470) 00:08:10.529 9326.277 - 9376.689: 51.6531% ( 396) 00:08:10.529 9376.689 - 9427.102: 54.4007% ( 364) 00:08:10.529 9427.102 - 9477.514: 56.9067% ( 332) 00:08:10.529 9477.514 - 9527.926: 59.4429% ( 336) 00:08:10.529 9527.926 - 9578.338: 62.1226% ( 355) 00:08:10.529 9578.338 - 9628.751: 64.1153% ( 264) 00:08:10.529 9628.751 - 9679.163: 66.0930% ( 262) 00:08:10.529 9679.163 - 9729.575: 68.3650% ( 301) 00:08:10.529 9729.575 - 9779.988: 70.2521% ( 250) 00:08:10.529 9779.988 - 9830.400: 71.9505% ( 225) 00:08:10.529 9830.400 - 9880.812: 73.2035% ( 166) 00:08:10.529 9880.812 - 9931.225: 74.1470% ( 125) 00:08:10.529 9931.225 - 9981.637: 75.2340% ( 144) 00:08:10.529 9981.637 - 10032.049: 76.3210% ( 144) 00:08:10.529 10032.049 - 10082.462: 77.3551% ( 137) 00:08:10.529 10082.462 - 10132.874: 78.2609% ( 120) 00:08:10.529 10132.874 - 10183.286: 78.9855% ( 96) 00:08:10.529 10183.286 - 10233.698: 79.8838% ( 119) 00:08:10.529 10233.698 - 10284.111: 80.6084% ( 96) 00:08:10.529 10284.111 - 10334.523: 81.3557% ( 99) 00:08:10.529 10334.523 - 10384.935: 82.1860% ( 110) 00:08:10.529 10384.935 - 10435.348: 83.0540% ( 115) 00:08:10.529 10435.348 - 10485.760: 83.9070% ( 113) 00:08:10.529 10485.760 - 10536.172: 84.6543% ( 99) 00:08:10.529 10536.172 - 10586.585: 85.3487% ( 92) 00:08:10.529 10586.585 - 10636.997: 86.1639% ( 108) 00:08:10.529 10636.997 - 10687.409: 86.7301% ( 75) 00:08:10.529 10687.409 - 10737.822: 87.1830% ( 60) 00:08:10.529 10737.822 - 10788.234: 87.5830% ( 53) 00:08:10.529 10788.234 - 10838.646: 87.9001% ( 42) 00:08:10.529 10838.646 - 10889.058: 88.1944% ( 39) 00:08:10.529 10889.058 - 10939.471: 88.4964% ( 40) 00:08:10.529 10939.471 - 10989.883: 88.8059% ( 41) 00:08:10.529 10989.883 - 11040.295: 89.1757% ( 49) 00:08:10.529 11040.295 - 11090.708: 89.4777% ( 40) 00:08:10.529 11090.708 - 11141.120: 89.7871% ( 41) 00:08:10.529 11141.120 - 11191.532: 90.0815% ( 39) 00:08:10.529 11191.532 - 11241.945: 90.3533% ( 36) 00:08:10.529 11241.945 - 11292.357: 90.6703% ( 42) 00:08:10.529 11292.357 - 11342.769: 90.9420% ( 36) 00:08:10.529 11342.769 - 11393.182: 91.1383% ( 26) 00:08:10.529 11393.182 - 11443.594: 91.4025% ( 35) 00:08:10.529 11443.594 - 11494.006: 91.6591% ( 34) 00:08:10.529 11494.006 - 11544.418: 92.0516% ( 52) 00:08:10.529 11544.418 - 11594.831: 92.4592% ( 54) 00:08:10.529 11594.831 - 11645.243: 92.7763% ( 42) 00:08:10.529 11645.243 - 11695.655: 93.1537% ( 50) 00:08:10.529 11695.655 - 11746.068: 93.5537% ( 53) 00:08:10.529 11746.068 - 11796.480: 93.9161% ( 48) 00:08:10.529 11796.480 - 11846.892: 94.1274% ( 28) 00:08:10.529 11846.892 - 11897.305: 94.3388% ( 
28) 00:08:10.530 11897.305 - 11947.717: 94.5501% ( 28) 00:08:10.530 11947.717 - 11998.129: 94.7841% ( 31) 00:08:10.530 11998.129 - 12048.542: 94.9653% ( 24) 00:08:10.530 12048.542 - 12098.954: 95.1011% ( 18) 00:08:10.530 12098.954 - 12149.366: 95.1917% ( 12) 00:08:10.530 12149.366 - 12199.778: 95.2823% ( 12) 00:08:10.530 12199.778 - 12250.191: 95.3880% ( 14) 00:08:10.530 12250.191 - 12300.603: 95.5314% ( 19) 00:08:10.530 12300.603 - 12351.015: 95.6522% ( 16) 00:08:10.530 12351.015 - 12401.428: 95.8031% ( 20) 00:08:10.530 12401.428 - 12451.840: 95.9239% ( 16) 00:08:10.530 12451.840 - 12502.252: 96.0296% ( 14) 00:08:10.530 12502.252 - 12552.665: 96.0975% ( 9) 00:08:10.530 12552.665 - 12603.077: 96.1730% ( 10) 00:08:10.530 12603.077 - 12653.489: 96.2560% ( 11) 00:08:10.530 12653.489 - 12703.902: 96.3768% ( 16) 00:08:10.530 12703.902 - 12754.314: 96.4674% ( 12) 00:08:10.530 12754.314 - 12804.726: 96.5655% ( 13) 00:08:10.530 12804.726 - 12855.138: 96.6787% ( 15) 00:08:10.530 12855.138 - 12905.551: 96.8146% ( 18) 00:08:10.530 12905.551 - 13006.375: 97.0864% ( 36) 00:08:10.530 13006.375 - 13107.200: 97.5091% ( 56) 00:08:10.530 13107.200 - 13208.025: 97.7431% ( 31) 00:08:10.530 13208.025 - 13308.849: 97.9393% ( 26) 00:08:10.530 13308.849 - 13409.674: 98.1205% ( 24) 00:08:10.530 13409.674 - 13510.498: 98.2563% ( 18) 00:08:10.530 13510.498 - 13611.323: 98.3243% ( 9) 00:08:10.530 13611.323 - 13712.148: 98.3922% ( 9) 00:08:10.530 13712.148 - 13812.972: 98.4375% ( 6) 00:08:10.530 13812.972 - 13913.797: 98.4828% ( 6) 00:08:10.530 13913.797 - 14014.622: 98.5356% ( 7) 00:08:10.530 14014.622 - 14115.446: 98.5507% ( 2) 00:08:10.530 14216.271 - 14317.095: 98.5583% ( 1) 00:08:10.530 14317.095 - 14417.920: 98.6036% ( 6) 00:08:10.530 14417.920 - 14518.745: 98.6564% ( 7) 00:08:10.530 14518.745 - 14619.569: 98.6941% ( 5) 00:08:10.530 14619.569 - 14720.394: 98.7470% ( 7) 00:08:10.530 14720.394 - 14821.218: 98.7923% ( 6) 00:08:10.530 14821.218 - 14922.043: 98.8376% ( 6) 00:08:10.530 14922.043 - 15022.868: 98.8904% ( 7) 00:08:10.530 15022.868 - 15123.692: 98.9357% ( 6) 00:08:10.530 15123.692 - 15224.517: 98.9810% ( 6) 00:08:10.530 15224.517 - 15325.342: 99.0263% ( 6) 00:08:10.530 15325.342 - 15426.166: 99.0338% ( 1) 00:08:10.530 19559.975 - 19660.800: 99.0414% ( 1) 00:08:10.530 19660.800 - 19761.625: 99.0867% ( 6) 00:08:10.530 19761.625 - 19862.449: 99.1395% ( 7) 00:08:10.530 19862.449 - 19963.274: 99.1923% ( 7) 00:08:10.530 19963.274 - 20064.098: 99.2376% ( 6) 00:08:10.530 20064.098 - 20164.923: 99.2905% ( 7) 00:08:10.530 20164.923 - 20265.748: 99.3433% ( 7) 00:08:10.530 20265.748 - 20366.572: 99.3886% ( 6) 00:08:10.530 20366.572 - 20467.397: 99.4414% ( 7) 00:08:10.530 20467.397 - 20568.222: 99.4867% ( 6) 00:08:10.530 20568.222 - 20669.046: 99.5169% ( 4) 00:08:10.530 24903.680 - 25004.505: 99.5320% ( 2) 00:08:10.530 25004.505 - 25105.329: 99.5773% ( 6) 00:08:10.530 25105.329 - 25206.154: 99.6301% ( 7) 00:08:10.530 25206.154 - 25306.978: 99.6754% ( 6) 00:08:10.530 25306.978 - 25407.803: 99.7207% ( 6) 00:08:10.530 25407.803 - 25508.628: 99.7736% ( 7) 00:08:10.530 25508.628 - 25609.452: 99.8264% ( 7) 00:08:10.530 25609.452 - 25710.277: 99.8792% ( 7) 00:08:10.530 25710.277 - 25811.102: 99.9321% ( 7) 00:08:10.530 25811.102 - 26012.751: 100.0000% ( 9) 00:08:10.530 00:08:10.530 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:10.530 ============================================================================== 00:08:10.530 Range in us Cumulative IO count 00:08:10.530 4537.108 - 4562.314: 0.0075% ( 1) 
00:08:10.530 4562.314 - 4587.520: 0.0151% ( 1) 00:08:10.530 4587.520 - 4612.726: 0.0302% ( 2) 00:08:10.530 4637.932 - 4663.138: 0.0453% ( 2) 00:08:10.530 4663.138 - 4688.345: 0.0679% ( 3) 00:08:10.530 4688.345 - 4713.551: 0.0906% ( 3) 00:08:10.530 4713.551 - 4738.757: 0.1434% ( 7) 00:08:10.530 4738.757 - 4763.963: 0.1812% ( 5) 00:08:10.530 4763.963 - 4789.169: 0.2868% ( 14) 00:08:10.530 4789.169 - 4814.375: 0.3246% ( 5) 00:08:10.530 4814.375 - 4839.582: 0.3548% ( 4) 00:08:10.530 4839.582 - 4864.788: 0.3925% ( 5) 00:08:10.530 4864.788 - 4889.994: 0.4227% ( 4) 00:08:10.530 4889.994 - 4915.200: 0.4454% ( 3) 00:08:10.530 4915.200 - 4940.406: 0.4529% ( 1) 00:08:10.530 4940.406 - 4965.612: 0.4755% ( 3) 00:08:10.530 4965.612 - 4990.818: 0.4831% ( 1) 00:08:10.530 7410.609 - 7461.022: 0.5057% ( 3) 00:08:10.530 7461.022 - 7511.434: 0.5737% ( 9) 00:08:10.530 7511.434 - 7561.846: 0.7322% ( 21) 00:08:10.530 7561.846 - 7612.258: 0.8907% ( 21) 00:08:10.530 7612.258 - 7662.671: 0.9662% ( 10) 00:08:10.530 7662.671 - 7713.083: 1.1021% ( 18) 00:08:10.530 7713.083 - 7763.495: 1.4644% ( 48) 00:08:10.530 7763.495 - 7813.908: 1.7210% ( 34) 00:08:10.530 7813.908 - 7864.320: 2.0305% ( 41) 00:08:10.530 7864.320 - 7914.732: 2.4004% ( 49) 00:08:10.530 7914.732 - 7965.145: 3.1929% ( 105) 00:08:10.530 7965.145 - 8015.557: 3.7968% ( 80) 00:08:10.530 8015.557 - 8065.969: 4.5365% ( 98) 00:08:10.530 8065.969 - 8116.382: 5.5405% ( 133) 00:08:10.530 8116.382 - 8166.794: 6.4161% ( 116) 00:08:10.530 8166.794 - 8217.206: 7.0501% ( 84) 00:08:10.530 8217.206 - 8267.618: 7.6766% ( 83) 00:08:10.530 8267.618 - 8318.031: 8.4541% ( 103) 00:08:10.530 8318.031 - 8368.443: 9.2769% ( 109) 00:08:10.530 8368.443 - 8418.855: 10.1072% ( 110) 00:08:10.530 8418.855 - 8469.268: 10.7563% ( 86) 00:08:10.530 8469.268 - 8519.680: 11.5338% ( 103) 00:08:10.530 8519.680 - 8570.092: 12.7340% ( 159) 00:08:10.530 8570.092 - 8620.505: 14.2739% ( 204) 00:08:10.530 8620.505 - 8670.917: 15.6024% ( 176) 00:08:10.530 8670.917 - 8721.329: 17.1573% ( 206) 00:08:10.530 8721.329 - 8771.742: 18.7651% ( 213) 00:08:10.530 8771.742 - 8822.154: 21.0749% ( 306) 00:08:10.530 8822.154 - 8872.566: 23.3545% ( 302) 00:08:10.530 8872.566 - 8922.978: 25.5208% ( 287) 00:08:10.530 8922.978 - 8973.391: 28.0344% ( 333) 00:08:10.530 8973.391 - 9023.803: 31.3179% ( 435) 00:08:10.530 9023.803 - 9074.215: 34.1033% ( 369) 00:08:10.530 9074.215 - 9124.628: 36.6168% ( 333) 00:08:10.530 9124.628 - 9175.040: 39.7569% ( 416) 00:08:10.530 9175.040 - 9225.452: 42.6479% ( 383) 00:08:10.530 9225.452 - 9275.865: 46.0069% ( 445) 00:08:10.530 9275.865 - 9326.277: 48.9734% ( 393) 00:08:10.530 9326.277 - 9376.689: 51.9173% ( 390) 00:08:10.530 9376.689 - 9427.102: 55.1404% ( 427) 00:08:10.530 9427.102 - 9477.514: 58.0012% ( 379) 00:08:10.530 9477.514 - 9527.926: 60.6431% ( 350) 00:08:10.530 9527.926 - 9578.338: 62.6434% ( 265) 00:08:10.530 9578.338 - 9628.751: 64.7343% ( 277) 00:08:10.530 9628.751 - 9679.163: 66.4327% ( 225) 00:08:10.530 9679.163 - 9729.575: 68.0933% ( 220) 00:08:10.530 9729.575 - 9779.988: 69.6256% ( 203) 00:08:10.530 9779.988 - 9830.400: 71.1353% ( 200) 00:08:10.530 9830.400 - 9880.812: 72.0411% ( 120) 00:08:10.530 9880.812 - 9931.225: 73.0148% ( 129) 00:08:10.530 9931.225 - 9981.637: 73.8979% ( 117) 00:08:10.530 9981.637 - 10032.049: 75.2415% ( 178) 00:08:10.530 10032.049 - 10082.462: 76.2530% ( 134) 00:08:10.530 10082.462 - 10132.874: 77.0607% ( 107) 00:08:10.530 10132.874 - 10183.286: 78.0118% ( 126) 00:08:10.530 10183.286 - 10233.698: 78.9855% ( 129) 00:08:10.530 10233.698 - 
10284.111: 79.9290% ( 125) 00:08:10.530 10284.111 - 10334.523: 80.7518% ( 109) 00:08:10.530 10334.523 - 10384.935: 81.3632% ( 81) 00:08:10.530 10384.935 - 10435.348: 82.1407% ( 103) 00:08:10.530 10435.348 - 10485.760: 82.8955% ( 100) 00:08:10.530 10485.760 - 10536.172: 83.6353% ( 98) 00:08:10.530 10536.172 - 10586.585: 84.2920% ( 87) 00:08:10.530 10586.585 - 10636.997: 85.1449% ( 113) 00:08:10.530 10636.997 - 10687.409: 86.0281% ( 117) 00:08:10.530 10687.409 - 10737.822: 86.7452% ( 95) 00:08:10.530 10737.822 - 10788.234: 87.4472% ( 93) 00:08:10.530 10788.234 - 10838.646: 87.9604% ( 68) 00:08:10.530 10838.646 - 10889.058: 88.4662% ( 67) 00:08:10.530 10889.058 - 10939.471: 89.0625% ( 79) 00:08:10.530 10939.471 - 10989.883: 89.5003% ( 58) 00:08:10.530 10989.883 - 11040.295: 89.7947% ( 39) 00:08:10.530 11040.295 - 11090.708: 90.1268% ( 44) 00:08:10.530 11090.708 - 11141.120: 90.4816% ( 47) 00:08:10.530 11141.120 - 11191.532: 90.7986% ( 42) 00:08:10.530 11191.532 - 11241.945: 91.1156% ( 42) 00:08:10.530 11241.945 - 11292.357: 91.3421% ( 30) 00:08:10.530 11292.357 - 11342.769: 91.5459% ( 27) 00:08:10.530 11342.769 - 11393.182: 91.7421% ( 26) 00:08:10.530 11393.182 - 11443.594: 91.9460% ( 27) 00:08:10.530 11443.594 - 11494.006: 92.1196% ( 23) 00:08:10.530 11494.006 - 11544.418: 92.2479% ( 17) 00:08:10.530 11544.418 - 11594.831: 92.3913% ( 19) 00:08:10.530 11594.831 - 11645.243: 92.5272% ( 18) 00:08:10.530 11645.243 - 11695.655: 92.6555% ( 17) 00:08:10.530 11695.655 - 11746.068: 92.7989% ( 19) 00:08:10.530 11746.068 - 11796.480: 93.1839% ( 51) 00:08:10.530 11796.480 - 11846.892: 93.4103% ( 30) 00:08:10.530 11846.892 - 11897.305: 93.6368% ( 30) 00:08:10.530 11897.305 - 11947.717: 93.8557% ( 29) 00:08:10.530 11947.717 - 11998.129: 94.0972% ( 32) 00:08:10.530 11998.129 - 12048.542: 94.3161% ( 29) 00:08:10.530 12048.542 - 12098.954: 94.7011% ( 51) 00:08:10.530 12098.954 - 12149.366: 94.8521% ( 20) 00:08:10.530 12149.366 - 12199.778: 94.9955% ( 19) 00:08:10.530 12199.778 - 12250.191: 95.2295% ( 31) 00:08:10.530 12250.191 - 12300.603: 95.3653% ( 18) 00:08:10.530 12300.603 - 12351.015: 95.5239% ( 21) 00:08:10.530 12351.015 - 12401.428: 95.6748% ( 20) 00:08:10.530 12401.428 - 12451.840: 95.8031% ( 17) 00:08:10.530 12451.840 - 12502.252: 95.9466% ( 19) 00:08:10.530 12502.252 - 12552.665: 96.0975% ( 20) 00:08:10.530 12552.665 - 12603.077: 96.2787% ( 24) 00:08:10.530 12603.077 - 12653.489: 96.4146% ( 18) 00:08:10.530 12653.489 - 12703.902: 96.4976% ( 11) 00:08:10.530 12703.902 - 12754.314: 96.5806% ( 11) 00:08:10.531 12754.314 - 12804.726: 96.6712% ( 12) 00:08:10.531 12804.726 - 12855.138: 96.7920% ( 16) 00:08:10.531 12855.138 - 12905.551: 96.9127% ( 16) 00:08:10.531 12905.551 - 13006.375: 97.0788% ( 22) 00:08:10.531 13006.375 - 13107.200: 97.3430% ( 35) 00:08:10.531 13107.200 - 13208.025: 97.5242% ( 24) 00:08:10.531 13208.025 - 13308.849: 97.6374% ( 15) 00:08:10.531 13308.849 - 13409.674: 97.7506% ( 15) 00:08:10.531 13409.674 - 13510.498: 97.8412% ( 12) 00:08:10.531 13510.498 - 13611.323: 98.0223% ( 24) 00:08:10.531 13611.323 - 13712.148: 98.2035% ( 24) 00:08:10.531 13712.148 - 13812.972: 98.4752% ( 36) 00:08:10.531 13812.972 - 13913.797: 98.6111% ( 18) 00:08:10.531 13913.797 - 14014.622: 98.7017% ( 12) 00:08:10.531 14014.622 - 14115.446: 98.7847% ( 11) 00:08:10.531 14115.446 - 14216.271: 98.8074% ( 3) 00:08:10.531 14216.271 - 14317.095: 98.8527% ( 6) 00:08:10.531 14317.095 - 14417.920: 98.9055% ( 7) 00:08:10.531 14417.920 - 14518.745: 98.9508% ( 6) 00:08:10.531 14518.745 - 14619.569: 99.0036% ( 7) 
00:08:10.531 14619.569 - 14720.394: 99.0338% ( 4) 00:08:10.531 19862.449 - 19963.274: 99.0716% ( 5) 00:08:10.531 19963.274 - 20064.098: 99.1168% ( 6) 00:08:10.531 20064.098 - 20164.923: 99.1697% ( 7) 00:08:10.531 20164.923 - 20265.748: 99.2225% ( 7) 00:08:10.531 20265.748 - 20366.572: 99.2754% ( 7) 00:08:10.531 20366.572 - 20467.397: 99.3207% ( 6) 00:08:10.531 20467.397 - 20568.222: 99.3735% ( 7) 00:08:10.531 20568.222 - 20669.046: 99.4263% ( 7) 00:08:10.531 20669.046 - 20769.871: 99.4792% ( 7) 00:08:10.531 20769.871 - 20870.695: 99.5169% ( 5) 00:08:10.531 25306.978 - 25407.803: 99.5320% ( 2) 00:08:10.531 25407.803 - 25508.628: 99.5697% ( 5) 00:08:10.531 25508.628 - 25609.452: 99.6226% ( 7) 00:08:10.531 25609.452 - 25710.277: 99.6754% ( 7) 00:08:10.531 25710.277 - 25811.102: 99.7207% ( 6) 00:08:10.531 25811.102 - 26012.751: 99.8264% ( 14) 00:08:10.531 26012.751 - 26214.400: 99.9170% ( 12) 00:08:10.531 26214.400 - 26416.049: 100.0000% ( 11) 00:08:10.531 00:08:10.531 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:10.531 ============================================================================== 00:08:10.531 Range in us Cumulative IO count 00:08:10.531 4335.458 - 4360.665: 0.0075% ( 1) 00:08:10.531 4360.665 - 4385.871: 0.0302% ( 3) 00:08:10.531 4385.871 - 4411.077: 0.0528% ( 3) 00:08:10.531 4411.077 - 4436.283: 0.1283% ( 10) 00:08:10.531 4436.283 - 4461.489: 0.2415% ( 15) 00:08:10.531 4461.489 - 4486.695: 0.3019% ( 8) 00:08:10.531 4486.695 - 4511.902: 0.3321% ( 4) 00:08:10.531 4511.902 - 4537.108: 0.3397% ( 1) 00:08:10.531 4537.108 - 4562.314: 0.3548% ( 2) 00:08:10.531 4562.314 - 4587.520: 0.3699% ( 2) 00:08:10.531 4587.520 - 4612.726: 0.3850% ( 2) 00:08:10.531 4612.726 - 4637.932: 0.4001% ( 2) 00:08:10.531 4637.932 - 4663.138: 0.4152% ( 2) 00:08:10.531 4663.138 - 4688.345: 0.4303% ( 2) 00:08:10.531 4688.345 - 4713.551: 0.4454% ( 2) 00:08:10.531 4713.551 - 4738.757: 0.4604% ( 2) 00:08:10.531 4738.757 - 4763.963: 0.4680% ( 1) 00:08:10.531 4763.963 - 4789.169: 0.4831% ( 2) 00:08:10.531 7158.548 - 7208.960: 0.5057% ( 3) 00:08:10.531 7208.960 - 7259.372: 0.5510% ( 6) 00:08:10.531 7259.372 - 7309.785: 0.6190% ( 9) 00:08:10.531 7309.785 - 7360.197: 0.7548% ( 18) 00:08:10.531 7360.197 - 7410.609: 0.8379% ( 11) 00:08:10.531 7410.609 - 7461.022: 0.8756% ( 5) 00:08:10.531 7461.022 - 7511.434: 0.9058% ( 4) 00:08:10.531 7511.434 - 7561.846: 0.9511% ( 6) 00:08:10.531 7561.846 - 7612.258: 1.0794% ( 17) 00:08:10.531 7612.258 - 7662.671: 1.1549% ( 10) 00:08:10.531 7662.671 - 7713.083: 1.2681% ( 15) 00:08:10.531 7713.083 - 7763.495: 1.4644% ( 26) 00:08:10.531 7763.495 - 7813.908: 1.8720% ( 54) 00:08:10.531 7813.908 - 7864.320: 2.1890% ( 42) 00:08:10.531 7864.320 - 7914.732: 2.4909% ( 40) 00:08:10.531 7914.732 - 7965.145: 3.0948% ( 80) 00:08:10.531 7965.145 - 8015.557: 3.5100% ( 55) 00:08:10.531 8015.557 - 8065.969: 4.2271% ( 95) 00:08:10.531 8065.969 - 8116.382: 4.7705% ( 72) 00:08:10.531 8116.382 - 8166.794: 5.3744% ( 80) 00:08:10.531 8166.794 - 8217.206: 5.9707% ( 79) 00:08:10.531 8217.206 - 8267.618: 6.7029% ( 97) 00:08:10.531 8267.618 - 8318.031: 7.4728% ( 102) 00:08:10.531 8318.031 - 8368.443: 8.2579% ( 104) 00:08:10.531 8368.443 - 8418.855: 9.1712% ( 121) 00:08:10.531 8418.855 - 8469.268: 10.0091% ( 111) 00:08:10.531 8469.268 - 8519.680: 11.2545% ( 165) 00:08:10.531 8519.680 - 8570.092: 12.4774% ( 162) 00:08:10.531 8570.092 - 8620.505: 13.9946% ( 201) 00:08:10.531 8620.505 - 8670.917: 15.7760% ( 236) 00:08:10.531 8670.917 - 8721.329: 17.5876% ( 240) 00:08:10.531 8721.329 - 
8771.742: 19.2935% ( 226) 00:08:10.531 8771.742 - 8822.154: 21.4749% ( 289) 00:08:10.531 8822.154 - 8872.566: 23.7545% ( 302) 00:08:10.531 8872.566 - 8922.978: 25.4529% ( 225) 00:08:10.531 8922.978 - 8973.391: 27.2268% ( 235) 00:08:10.531 8973.391 - 9023.803: 29.6950% ( 327) 00:08:10.531 9023.803 - 9074.215: 32.7748% ( 408) 00:08:10.531 9074.215 - 9124.628: 35.2657% ( 330) 00:08:10.531 9124.628 - 9175.040: 37.8850% ( 347) 00:08:10.531 9175.040 - 9225.452: 41.2515% ( 446) 00:08:10.531 9225.452 - 9275.865: 44.7313% ( 461) 00:08:10.531 9275.865 - 9326.277: 47.8034% ( 407) 00:08:10.531 9326.277 - 9376.689: 51.2832% ( 461) 00:08:10.531 9376.689 - 9427.102: 54.2195% ( 389) 00:08:10.531 9427.102 - 9477.514: 57.7974% ( 474) 00:08:10.531 9477.514 - 9527.926: 60.8092% ( 399) 00:08:10.531 9527.926 - 9578.338: 63.3832% ( 341) 00:08:10.531 9578.338 - 9628.751: 65.7080% ( 308) 00:08:10.531 9628.751 - 9679.163: 67.7612% ( 272) 00:08:10.531 9679.163 - 9729.575: 69.7237% ( 260) 00:08:10.531 9729.575 - 9779.988: 71.2636% ( 204) 00:08:10.531 9779.988 - 9830.400: 72.5694% ( 173) 00:08:10.531 9830.400 - 9880.812: 73.6111% ( 138) 00:08:10.531 9880.812 - 9931.225: 74.4641% ( 113) 00:08:10.531 9931.225 - 9981.637: 75.4001% ( 124) 00:08:10.531 9981.637 - 10032.049: 76.4493% ( 139) 00:08:10.531 10032.049 - 10082.462: 77.5362% ( 144) 00:08:10.531 10082.462 - 10132.874: 78.2760% ( 98) 00:08:10.531 10132.874 - 10183.286: 79.2648% ( 131) 00:08:10.531 10183.286 - 10233.698: 80.1404% ( 116) 00:08:10.531 10233.698 - 10284.111: 81.1368% ( 132) 00:08:10.531 10284.111 - 10334.523: 81.9671% ( 110) 00:08:10.531 10334.523 - 10384.935: 82.8125% ( 112) 00:08:10.531 10384.935 - 10435.348: 83.4013% ( 78) 00:08:10.531 10435.348 - 10485.760: 83.9749% ( 76) 00:08:10.531 10485.760 - 10536.172: 84.5411% ( 75) 00:08:10.531 10536.172 - 10586.585: 85.1223% ( 77) 00:08:10.531 10586.585 - 10636.997: 85.7111% ( 78) 00:08:10.531 10636.997 - 10687.409: 86.3829% ( 89) 00:08:10.531 10687.409 - 10737.822: 86.8961% ( 68) 00:08:10.531 10737.822 - 10788.234: 87.4321% ( 71) 00:08:10.531 10788.234 - 10838.646: 88.0963% ( 88) 00:08:10.531 10838.646 - 10889.058: 88.6171% ( 69) 00:08:10.531 10889.058 - 10939.471: 88.9493% ( 44) 00:08:10.531 10939.471 - 10989.883: 89.2588% ( 41) 00:08:10.531 10989.883 - 11040.295: 89.7343% ( 63) 00:08:10.531 11040.295 - 11090.708: 90.0740% ( 45) 00:08:10.531 11090.708 - 11141.120: 90.4514% ( 50) 00:08:10.531 11141.120 - 11191.532: 90.7382% ( 38) 00:08:10.531 11191.532 - 11241.945: 91.0553% ( 42) 00:08:10.531 11241.945 - 11292.357: 91.2742% ( 29) 00:08:10.531 11292.357 - 11342.769: 91.3723% ( 13) 00:08:10.531 11342.769 - 11393.182: 91.5006% ( 17) 00:08:10.531 11393.182 - 11443.594: 91.6818% ( 24) 00:08:10.531 11443.594 - 11494.006: 91.9082% ( 30) 00:08:10.531 11494.006 - 11544.418: 92.1271% ( 29) 00:08:10.531 11544.418 - 11594.831: 92.3536% ( 30) 00:08:10.531 11594.831 - 11645.243: 92.5800% ( 30) 00:08:10.531 11645.243 - 11695.655: 92.8140% ( 31) 00:08:10.531 11695.655 - 11746.068: 93.0782% ( 35) 00:08:10.531 11746.068 - 11796.480: 93.3424% ( 35) 00:08:10.531 11796.480 - 11846.892: 93.5688% ( 30) 00:08:10.531 11846.892 - 11897.305: 93.7425% ( 23) 00:08:10.531 11897.305 - 11947.717: 93.9614% ( 29) 00:08:10.531 11947.717 - 11998.129: 94.2255% ( 35) 00:08:10.531 11998.129 - 12048.542: 94.4520% ( 30) 00:08:10.531 12048.542 - 12098.954: 94.6332% ( 24) 00:08:10.531 12098.954 - 12149.366: 94.8898% ( 34) 00:08:10.532 12149.366 - 12199.778: 95.1993% ( 41) 00:08:10.532 12199.778 - 12250.191: 95.3578% ( 21) 00:08:10.532 12250.191 - 
12300.603: 95.4937% ( 18) 00:08:10.532 12300.603 - 12351.015: 95.7050% ( 28) 00:08:10.532 12351.015 - 12401.428: 95.8258% ( 16) 00:08:10.532 12401.428 - 12451.840: 95.9617% ( 18) 00:08:10.532 12451.840 - 12502.252: 96.0749% ( 15) 00:08:10.532 12502.252 - 12552.665: 96.2107% ( 18) 00:08:10.532 12552.665 - 12603.077: 96.3391% ( 17) 00:08:10.532 12603.077 - 12653.489: 96.4372% ( 13) 00:08:10.532 12653.489 - 12703.902: 96.5504% ( 15) 00:08:10.532 12703.902 - 12754.314: 96.6335% ( 11) 00:08:10.532 12754.314 - 12804.726: 96.7467% ( 15) 00:08:10.532 12804.726 - 12855.138: 96.8146% ( 9) 00:08:10.532 12855.138 - 12905.551: 96.8599% ( 6) 00:08:10.532 12905.551 - 13006.375: 96.9958% ( 18) 00:08:10.532 13006.375 - 13107.200: 97.1543% ( 21) 00:08:10.532 13107.200 - 13208.025: 97.2826% ( 17) 00:08:10.532 13208.025 - 13308.849: 97.4336% ( 20) 00:08:10.532 13308.849 - 13409.674: 97.5619% ( 17) 00:08:10.532 13409.674 - 13510.498: 97.7053% ( 19) 00:08:10.532 13510.498 - 13611.323: 97.8185% ( 15) 00:08:10.532 13611.323 - 13712.148: 97.9620% ( 19) 00:08:10.532 13712.148 - 13812.972: 98.2639% ( 40) 00:08:10.532 13812.972 - 13913.797: 98.3922% ( 17) 00:08:10.532 13913.797 - 14014.622: 98.5885% ( 26) 00:08:10.532 14014.622 - 14115.446: 98.6790% ( 12) 00:08:10.532 14115.446 - 14216.271: 98.7545% ( 10) 00:08:10.532 14216.271 - 14317.095: 98.8451% ( 12) 00:08:10.532 14317.095 - 14417.920: 98.9432% ( 13) 00:08:10.532 14417.920 - 14518.745: 98.9885% ( 6) 00:08:10.532 14518.745 - 14619.569: 99.0187% ( 4) 00:08:10.532 14619.569 - 14720.394: 99.0338% ( 2) 00:08:10.532 19459.151 - 19559.975: 99.0640% ( 4) 00:08:10.532 19559.975 - 19660.800: 99.1093% ( 6) 00:08:10.532 19660.800 - 19761.625: 99.1621% ( 7) 00:08:10.532 19761.625 - 19862.449: 99.2150% ( 7) 00:08:10.532 19862.449 - 19963.274: 99.2678% ( 7) 00:08:10.532 19963.274 - 20064.098: 99.3131% ( 6) 00:08:10.532 20064.098 - 20164.923: 99.3659% ( 7) 00:08:10.532 20164.923 - 20265.748: 99.4188% ( 7) 00:08:10.532 20265.748 - 20366.572: 99.4641% ( 6) 00:08:10.532 20366.572 - 20467.397: 99.5169% ( 7) 00:08:10.532 25306.978 - 25407.803: 99.5622% ( 6) 00:08:10.532 25407.803 - 25508.628: 99.6150% ( 7) 00:08:10.532 25508.628 - 25609.452: 99.6603% ( 6) 00:08:10.532 25609.452 - 25710.277: 99.7132% ( 7) 00:08:10.532 25710.277 - 25811.102: 99.7585% ( 6) 00:08:10.532 25811.102 - 26012.751: 99.8641% ( 14) 00:08:10.532 26012.751 - 26214.400: 99.9623% ( 13) 00:08:10.532 26214.400 - 26416.049: 100.0000% ( 5) 00:08:10.532 00:08:10.532 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:10.532 ============================================================================== 00:08:10.532 Range in us Cumulative IO count 00:08:10.532 4007.778 - 4032.985: 0.0075% ( 1) 00:08:10.532 4159.015 - 4184.222: 0.0300% ( 3) 00:08:10.532 4184.222 - 4209.428: 0.1202% ( 12) 00:08:10.532 4209.428 - 4234.634: 0.1953% ( 10) 00:08:10.532 4234.634 - 4259.840: 0.2404% ( 6) 00:08:10.532 4259.840 - 4285.046: 0.2930% ( 7) 00:08:10.532 4285.046 - 4310.252: 0.3155% ( 3) 00:08:10.532 4310.252 - 4335.458: 0.3380% ( 3) 00:08:10.532 4335.458 - 4360.665: 0.3531% ( 2) 00:08:10.532 4360.665 - 4385.871: 0.3681% ( 2) 00:08:10.532 4385.871 - 4411.077: 0.3831% ( 2) 00:08:10.532 4411.077 - 4436.283: 0.3981% ( 2) 00:08:10.532 4436.283 - 4461.489: 0.4207% ( 3) 00:08:10.532 4461.489 - 4486.695: 0.4357% ( 2) 00:08:10.532 4486.695 - 4511.902: 0.4507% ( 2) 00:08:10.532 4511.902 - 4537.108: 0.4733% ( 3) 00:08:10.532 4537.108 - 4562.314: 0.4808% ( 1) 00:08:10.532 6704.837 - 6755.249: 0.5183% ( 5) 00:08:10.532 6755.249 - 
6805.662: 0.5484% ( 4) 00:08:10.532 6805.662 - 6856.074: 0.6085% ( 8) 00:08:10.532 6856.074 - 6906.486: 0.6686% ( 8) 00:08:10.532 6906.486 - 6956.898: 0.7061% ( 5) 00:08:10.532 6956.898 - 7007.311: 0.7362% ( 4) 00:08:10.532 7007.311 - 7057.723: 0.7662% ( 4) 00:08:10.532 7057.723 - 7108.135: 0.7888% ( 3) 00:08:10.532 7108.135 - 7158.548: 0.8188% ( 4) 00:08:10.532 7158.548 - 7208.960: 0.8564% ( 5) 00:08:10.532 7208.960 - 7259.372: 0.8864% ( 4) 00:08:10.532 7259.372 - 7309.785: 0.9090% ( 3) 00:08:10.532 7309.785 - 7360.197: 0.9465% ( 5) 00:08:10.532 7360.197 - 7410.609: 0.9615% ( 2) 00:08:10.532 7511.434 - 7561.846: 0.9766% ( 2) 00:08:10.532 7561.846 - 7612.258: 0.9916% ( 2) 00:08:10.532 7612.258 - 7662.671: 1.0291% ( 5) 00:08:10.532 7662.671 - 7713.083: 1.1719% ( 19) 00:08:10.532 7713.083 - 7763.495: 1.3447% ( 23) 00:08:10.532 7763.495 - 7813.908: 1.7127% ( 49) 00:08:10.532 7813.908 - 7864.320: 2.1710% ( 61) 00:08:10.532 7864.320 - 7914.732: 2.7043% ( 71) 00:08:10.532 7914.732 - 7965.145: 3.2227% ( 69) 00:08:10.532 7965.145 - 8015.557: 3.9213% ( 93) 00:08:10.532 8015.557 - 8065.969: 4.5673% ( 86) 00:08:10.532 8065.969 - 8116.382: 5.1382% ( 76) 00:08:10.532 8116.382 - 8166.794: 5.7692% ( 84) 00:08:10.532 8166.794 - 8217.206: 6.4904% ( 96) 00:08:10.532 8217.206 - 8267.618: 7.1514% ( 88) 00:08:10.532 8267.618 - 8318.031: 7.8726% ( 96) 00:08:10.532 8318.031 - 8368.443: 8.7515% ( 117) 00:08:10.532 8368.443 - 8418.855: 9.8558% ( 147) 00:08:10.532 8418.855 - 8469.268: 10.6445% ( 105) 00:08:10.532 8469.268 - 8519.680: 11.7638% ( 149) 00:08:10.532 8519.680 - 8570.092: 13.0334% ( 169) 00:08:10.532 8570.092 - 8620.505: 14.3104% ( 170) 00:08:10.532 8620.505 - 8670.917: 15.5950% ( 171) 00:08:10.532 8670.917 - 8721.329: 17.0222% ( 190) 00:08:10.532 8721.329 - 8771.742: 18.7200% ( 226) 00:08:10.532 8771.742 - 8822.154: 20.5454% ( 243) 00:08:10.532 8822.154 - 8872.566: 22.5511% ( 267) 00:08:10.532 8872.566 - 8922.978: 24.6319% ( 277) 00:08:10.532 8922.978 - 8973.391: 26.8254% ( 292) 00:08:10.532 8973.391 - 9023.803: 29.1917% ( 315) 00:08:10.532 9023.803 - 9074.215: 31.8209% ( 350) 00:08:10.532 9074.215 - 9124.628: 34.6529% ( 377) 00:08:10.532 9124.628 - 9175.040: 37.4700% ( 375) 00:08:10.532 9175.040 - 9225.452: 40.8278% ( 447) 00:08:10.532 9225.452 - 9275.865: 44.7416% ( 521) 00:08:10.532 9275.865 - 9326.277: 48.3023% ( 474) 00:08:10.532 9326.277 - 9376.689: 51.3296% ( 403) 00:08:10.532 9376.689 - 9427.102: 54.3870% ( 407) 00:08:10.532 9427.102 - 9477.514: 57.3543% ( 395) 00:08:10.532 9477.514 - 9527.926: 60.4868% ( 417) 00:08:10.532 9527.926 - 9578.338: 63.4240% ( 391) 00:08:10.532 9578.338 - 9628.751: 66.1208% ( 359) 00:08:10.532 9628.751 - 9679.163: 68.0889% ( 262) 00:08:10.532 9679.163 - 9729.575: 69.7341% ( 219) 00:08:10.532 9729.575 - 9779.988: 71.1614% ( 190) 00:08:10.532 9779.988 - 9830.400: 72.9117% ( 233) 00:08:10.532 9830.400 - 9880.812: 74.1286% ( 162) 00:08:10.532 9880.812 - 9931.225: 75.0225% ( 119) 00:08:10.532 9931.225 - 9981.637: 75.7888% ( 102) 00:08:10.532 9981.637 - 10032.049: 76.7127% ( 123) 00:08:10.532 10032.049 - 10082.462: 77.7269% ( 135) 00:08:10.532 10082.462 - 10132.874: 78.4480% ( 96) 00:08:10.532 10132.874 - 10183.286: 79.1617% ( 95) 00:08:10.532 10183.286 - 10233.698: 79.8978% ( 98) 00:08:10.532 10233.698 - 10284.111: 80.7166% ( 109) 00:08:10.532 10284.111 - 10334.523: 81.5204% ( 107) 00:08:10.532 10334.523 - 10384.935: 82.3092% ( 105) 00:08:10.532 10384.935 - 10435.348: 83.1130% ( 107) 00:08:10.532 10435.348 - 10485.760: 83.6839% ( 76) 00:08:10.532 10485.760 - 10536.172: 
84.1722% ( 65) 00:08:10.532 10536.172 - 10586.585: 84.8407% ( 89) 00:08:10.532 10586.585 - 10636.997: 85.4718% ( 84) 00:08:10.532 10636.997 - 10687.409: 86.0727% ( 80) 00:08:10.532 10687.409 - 10737.822: 86.5835% ( 68) 00:08:10.532 10737.822 - 10788.234: 87.0117% ( 57) 00:08:10.532 10788.234 - 10838.646: 87.4023% ( 52) 00:08:10.532 10838.646 - 10889.058: 87.8831% ( 64) 00:08:10.532 10889.058 - 10939.471: 88.3714% ( 65) 00:08:10.532 10939.471 - 10989.883: 88.7245% ( 47) 00:08:10.532 10989.883 - 11040.295: 89.1226% ( 53) 00:08:10.532 11040.295 - 11090.708: 89.3930% ( 36) 00:08:10.532 11090.708 - 11141.120: 89.6710% ( 37) 00:08:10.532 11141.120 - 11191.532: 89.9489% ( 37) 00:08:10.532 11191.532 - 11241.945: 90.2344% ( 38) 00:08:10.532 11241.945 - 11292.357: 90.4447% ( 28) 00:08:10.532 11292.357 - 11342.769: 90.6250% ( 24) 00:08:10.532 11342.769 - 11393.182: 90.9555% ( 44) 00:08:10.532 11393.182 - 11443.594: 91.2184% ( 35) 00:08:10.532 11443.594 - 11494.006: 91.5715% ( 47) 00:08:10.532 11494.006 - 11544.418: 91.9471% ( 50) 00:08:10.532 11544.418 - 11594.831: 92.4354% ( 65) 00:08:10.532 11594.831 - 11645.243: 92.7509% ( 42) 00:08:10.532 11645.243 - 11695.655: 93.2392% ( 65) 00:08:10.532 11695.655 - 11746.068: 93.5096% ( 36) 00:08:10.532 11746.068 - 11796.480: 93.7725% ( 35) 00:08:10.532 11796.480 - 11846.892: 94.0204% ( 33) 00:08:10.532 11846.892 - 11897.305: 94.2233% ( 27) 00:08:10.532 11897.305 - 11947.717: 94.4486% ( 30) 00:08:10.532 11947.717 - 11998.129: 94.6439% ( 26) 00:08:10.532 11998.129 - 12048.542: 94.9144% ( 36) 00:08:10.532 12048.542 - 12098.954: 95.0721% ( 21) 00:08:10.532 12098.954 - 12149.366: 95.2148% ( 19) 00:08:10.532 12149.366 - 12199.778: 95.3425% ( 17) 00:08:10.532 12199.778 - 12250.191: 95.5228% ( 24) 00:08:10.532 12250.191 - 12300.603: 95.7257% ( 27) 00:08:10.532 12300.603 - 12351.015: 95.8684% ( 19) 00:08:10.532 12351.015 - 12401.428: 96.0712% ( 27) 00:08:10.532 12401.428 - 12451.840: 96.1839% ( 15) 00:08:10.532 12451.840 - 12502.252: 96.2816% ( 13) 00:08:10.532 12502.252 - 12552.665: 96.4017% ( 16) 00:08:10.532 12552.665 - 12603.077: 96.4769% ( 10) 00:08:10.532 12603.077 - 12653.489: 96.5294% ( 7) 00:08:10.532 12653.489 - 12703.902: 96.5895% ( 8) 00:08:10.532 12703.902 - 12754.314: 96.6346% ( 6) 00:08:10.532 12754.314 - 12804.726: 96.6722% ( 5) 00:08:10.532 12804.726 - 12855.138: 96.7849% ( 15) 00:08:10.533 12855.138 - 12905.551: 96.8600% ( 10) 00:08:10.533 12905.551 - 13006.375: 97.0252% ( 22) 00:08:10.533 13006.375 - 13107.200: 97.1229% ( 13) 00:08:10.533 13107.200 - 13208.025: 97.2055% ( 11) 00:08:10.533 13208.025 - 13308.849: 97.3708% ( 22) 00:08:10.533 13308.849 - 13409.674: 97.5736% ( 27) 00:08:10.533 13409.674 - 13510.498: 97.8441% ( 36) 00:08:10.533 13510.498 - 13611.323: 98.1370% ( 39) 00:08:10.533 13611.323 - 13712.148: 98.3849% ( 33) 00:08:10.533 13712.148 - 13812.972: 98.5427% ( 21) 00:08:10.533 13812.972 - 13913.797: 98.6929% ( 20) 00:08:10.533 13913.797 - 14014.622: 98.7981% ( 14) 00:08:10.533 14014.622 - 14115.446: 98.8957% ( 13) 00:08:10.533 14115.446 - 14216.271: 98.9709% ( 10) 00:08:10.533 14216.271 - 14317.095: 99.0309% ( 8) 00:08:10.533 14317.095 - 14417.920: 99.0460% ( 2) 00:08:10.533 14417.920 - 14518.745: 99.0910% ( 6) 00:08:10.533 14518.745 - 14619.569: 99.1511% ( 8) 00:08:10.533 14619.569 - 14720.394: 99.2188% ( 9) 00:08:10.533 14720.394 - 14821.218: 99.2939% ( 10) 00:08:10.533 14821.218 - 14922.043: 99.4141% ( 16) 00:08:10.533 14922.043 - 15022.868: 99.4591% ( 6) 00:08:10.533 15022.868 - 15123.692: 99.5042% ( 6) 00:08:10.533 15123.692 - 
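Every bucket line in these histograms has the fixed shape "low - high: cumulative% ( count )", so percentile latencies can be recovered from a saved log without rerunning nvme_perf. A minimal sketch, assuming one histogram section was saved to perf.log with the leading elapsed-time column stripped (the file name and the post-processing step are this note's assumptions, not part of the harness; the script reports the first histogram it sees and exits):

    $ awk '$2 == "-" && $4 ~ /%/ {
            pct = $4; sub(/%/, "", pct);   # cumulative percentage for this bucket
            if (pct + 0 >= 99.0) {         # first bucket at or above the target percentile
                hi = $3; sub(/:$/, "", hi)
                print "p99 <= " hi " us"; exit
            }
        }' perf.log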
00:08:10.791 23:57:00 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:10.791 23:57:00 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:10.791 23:57:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:10.791 23:57:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:10.791 ************************************
00:08:10.791 START TEST nvme_hello_world
00:08:10.791 ************************************
00:08:10.791 23:57:00 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:10.791 Initializing NVMe Controllers
00:08:10.791 Attached to 0000:00:13.0
00:08:10.791 Namespace ID: 1 size: 1GB
00:08:10.791 Attached to 0000:00:10.0
00:08:10.791 Namespace ID: 1 size: 6GB
00:08:10.791 Attached to 0000:00:11.0
00:08:10.791 Namespace ID: 1 size: 5GB
00:08:10.791 Attached to 0000:00:12.0
00:08:10.791 Namespace ID: 1 size: 4GB
00:08:10.791 Namespace ID: 2 size: 4GB
00:08:10.791 Namespace ID: 3 size: 4GB
00:08:10.791 Initialization complete.
00:08:10.791 INFO: using host memory buffer for IO
00:08:10.791 Hello world!
00:08:10.791 INFO: using host memory buffer for IO
00:08:10.791 Hello world!
00:08:10.791 INFO: using host memory buffer for IO
00:08:10.791 Hello world!
00:08:10.791 INFO: using host memory buffer for IO
00:08:10.791 Hello world!
00:08:10.791 INFO: using host memory buffer for IO
00:08:10.791 Hello world!
00:08:10.791 INFO: using host memory buffer for IO
00:08:10.791 Hello world!
00:08:10.791 ************************************
00:08:10.791 END TEST nvme_hello_world
00:08:10.791 ************************************
00:08:10.792 real 0m0.181s
00:08:10.792 user 0m0.063s
00:08:10.792 sys 0m0.075s
00:08:10.792 23:57:01 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:10.792 23:57:01 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
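The hello_world example exercised above (one "Hello world!" per attached namespace, six in total here) can also be run by hand against whatever controllers SPDK's setup script has bound to a userspace driver. A minimal sketch using the same repo layout and the same -i 0 shared-memory id as the harness; running setup.sh first is this note's assumption, since the job binds devices elsewhere in the pipeline:

    $ cd /home/vagrant/spdk_repo/spdk
    $ sudo scripts/setup.sh                    # bind NVMe controllers to vfio-pci/uio for userspace access
    $ sudo build/examples/hello_world -i 0     # prints one Hello world! per namespace, as above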
00:08:10.792 23:57:01 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:10.792 23:57:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:10.792 23:57:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:10.792 23:57:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:10.792 ************************************
00:08:10.792 START TEST nvme_sgl
00:08:10.792 ************************************
00:08:10.792 23:57:01 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:11.050 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:11.050 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:11.050 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:11.050 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:11.050 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:11.050 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:11.050 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:11.050 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:11.050 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:11.050 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:11.050 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:11.050 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:11.050 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:11.050 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:11.050 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:11.050 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:11.050 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:11.050 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:11.050 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:11.050 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:11.050 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:11.050 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:11.050 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:11.050 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:11.050 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:11.050 NVMe Readv/Writev Request test
00:08:11.050 Attached to 0000:00:13.0
00:08:11.050 Attached to 0000:00:10.0
00:08:11.050 Attached to 0000:00:11.0
00:08:11.050 Attached to 0000:00:12.0
00:08:11.050 0000:00:10.0: build_io_request_2 test passed
00:08:11.050 0000:00:10.0: build_io_request_4 test passed
00:08:11.050 0000:00:10.0: build_io_request_5 test passed
00:08:11.050 0000:00:10.0: build_io_request_6 test passed
00:08:11.050 0000:00:10.0: build_io_request_7 test passed
00:08:11.050 0000:00:10.0: build_io_request_10 test passed
00:08:11.050 0000:00:11.0: build_io_request_2 test passed
00:08:11.050 0000:00:11.0: build_io_request_4 test passed
00:08:11.051 0000:00:11.0: build_io_request_5 test passed
00:08:11.051 0000:00:11.0: build_io_request_6 test passed
00:08:11.051 0000:00:11.0: build_io_request_7 test passed
00:08:11.051 0000:00:11.0: build_io_request_10 test passed
00:08:11.051 Cleaning up...
00:08:11.051 real 0m0.243s
00:08:11.051 user 0m0.112s
00:08:11.051 sys 0m0.091s
00:08:11.051 23:57:01 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:11.051 23:57:01 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:11.051 ************************************
00:08:11.051 END TEST nvme_sgl
00:08:11.051 ************************************
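The mixed results above are the point of this test: build_io_request_* cases with deliberately invalid lengths must be rejected, and which of the remaining cases complete differs per controller (here only 0000:00:10.0 and 0000:00:11.0 pass requests 2, 4, 5, 6, 7 and 10). A quick manual rerun, mirroring the @1125 invocation above; the grep count is just a convenience of this note and the expected total of 12 holds for this four-controller topology:

    $ sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl | grep -c 'test passed'   # 12 on this topology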
00:08:11.309 23:57:01 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:11.309 23:57:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:11.309 23:57:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:11.309 23:57:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:11.309 ************************************
00:08:11.309 START TEST nvme_e2edp
00:08:11.309 ************************************
00:08:11.309 23:57:01 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:11.309 NVMe Write/Read with End-to-End data protection test
00:08:11.309 Attached to 0000:00:13.0
00:08:11.309 Attached to 0000:00:10.0
00:08:11.309 Attached to 0000:00:11.0
00:08:11.309 Attached to 0000:00:12.0
00:08:11.309 Cleaning up...
00:08:11.309 real 0m0.169s
00:08:11.309 user 0m0.051s
00:08:11.309 sys 0m0.073s
00:08:11.309 23:57:01 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:11.309 23:57:01 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:11.309 ************************************
00:08:11.309 END TEST nvme_e2edp
00:08:11.309 ************************************
00:08:11.309 23:57:01 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:11.309 23:57:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:11.309 23:57:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:11.309 23:57:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:11.567 ************************************
00:08:11.567 START TEST nvme_reserve
00:08:11.567 ************************************
00:08:11.567 23:57:01 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:11.567 =====================================================
00:08:11.567 NVMe Controller at PCI bus 0, device 19, function 0
00:08:11.567 =====================================================
00:08:11.567 Reservations: Not Supported
00:08:11.567 =====================================================
00:08:11.567 NVMe Controller at PCI bus 0, device 16, function 0
00:08:11.567 =====================================================
00:08:11.567 Reservations: Not Supported
00:08:11.567 =====================================================
00:08:11.567 NVMe Controller at PCI bus 0, device 17, function 0
00:08:11.567 =====================================================
00:08:11.567 Reservations: Not Supported
00:08:11.567 =====================================================
00:08:11.567 NVMe Controller at PCI bus 0, device 18, function 0
00:08:11.567 =====================================================
00:08:11.567 Reservations: Not Supported
00:08:11.567 Reservation test passed
00:08:11.567 ************************************
00:08:11.567 END TEST nvme_reserve
00:08:11.567 ************************************
00:08:11.567 real 0m0.158s
00:08:11.567 user 0m0.040s
00:08:11.567 sys 0m0.077s
00:08:11.567 23:57:01 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:11.567 23:57:01 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
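Reservations are an optional NVMe feature, and all four emulated controllers report them unsupported, so the test passes without exercising any reservation commands; on hardware that does support them the same binary would run an actual reservation sequence. A quick support check by hand, relying only on the "Reservations:" lines the binary prints above:

    $ sudo /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve | grep 'Reservations:'   # Supported / Not Supported per controller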
00:08:11.567 23:57:01 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:11.567 23:57:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:11.567 23:57:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:11.567 23:57:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:11.825 ************************************
00:08:11.825 START TEST nvme_err_injection
00:08:11.825 ************************************
00:08:11.825 23:57:02 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:11.825 NVMe Error Injection test
00:08:11.825 Attached to 0000:00:13.0
00:08:11.825 Attached to 0000:00:10.0
00:08:11.825 Attached to 0000:00:11.0
00:08:11.825 Attached to 0000:00:12.0
00:08:11.825 0000:00:13.0: get features failed as expected
00:08:11.825 0000:00:10.0: get features failed as expected
00:08:11.825 0000:00:11.0: get features failed as expected
00:08:11.825 0000:00:12.0: get features failed as expected
00:08:11.825 0000:00:13.0: get features successfully as expected
00:08:11.825 0000:00:10.0: get features successfully as expected
00:08:11.825 0000:00:11.0: get features successfully as expected
00:08:11.825 0000:00:12.0: get features successfully as expected
00:08:11.825 0000:00:12.0: read failed as expected
00:08:11.825 0000:00:13.0: read failed as expected
00:08:11.825 0000:00:10.0: read failed as expected
00:08:11.825 0000:00:11.0: read failed as expected
00:08:11.825 0000:00:11.0: read successfully as expected
00:08:11.825 0000:00:12.0: read successfully as expected
00:08:11.825 0000:00:13.0: read successfully as expected
00:08:11.825 0000:00:10.0: read successfully as expected
00:08:11.825 Cleaning up...
00:08:11.825 ************************************
00:08:11.825 END TEST nvme_err_injection
00:08:11.825 ************************************
00:08:11.825 real 0m0.170s
00:08:11.825 user 0m0.054s
00:08:11.825 sys 0m0.080s
00:08:11.825 23:57:02 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:11.825 23:57:02 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
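The failed/succeeded pairing above is the intended pattern: as the output suggests, the test arms an error injection, confirms the command fails ("failed as expected"), then clears it and confirms the same command succeeds ("successfully as expected"), once per controller for both get-features and read. A quick pass/fail check when rerunning it by hand; the grep counts are just a convenience of this note, and 8 matches per pattern holds for this four-controller topology (four get-features plus four reads):

    $ sudo /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection | tee err.log
    $ grep -c 'failed as expected' err.log          # expect 8 here
    $ grep -c 'successfully as expected' err.log    # expect 8 here as well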
00:08:11.825 23:57:02 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:11.825 23:57:02 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:08:11.825 23:57:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:11.825 23:57:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:11.825 ************************************
00:08:11.825 START TEST nvme_overhead
00:08:11.825 ************************************
00:08:11.825 23:57:02 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:13.244 Initializing NVMe Controllers
00:08:13.244 Attached to 0000:00:13.0
00:08:13.244 Attached to 0000:00:10.0
00:08:13.244 Attached to 0000:00:11.0
00:08:13.244 Attached to 0000:00:12.0
00:08:13.244 Initialization complete. Launching workers.
00:08:13.244 submit (in ns) avg, min, max = 12308.1, 9740.0, 325586.9
00:08:13.244 complete (in ns) avg, min, max = 8114.8, 7310.0, 91984.6
00:08:13.244 Submit histogram
00:08:13.244 ================
00:08:13.244        Range in us     Cumulative     Count
00:08:13.245 [per-bucket data: 9.698 - 326.105 us, cumulative 0.0237% through 100.0000%]
00:08:13.245 Complete histogram
00:08:13.245 ==================
00:08:13.245        Range in us     Cumulative     Count
00:08:13.245 [per-bucket data: 7.286 - 92.160 us, cumulative 0.0947% through 100.0000%]
00:08:13.245 real 0m1.187s
00:08:13.245 user 0m1.057s
00:08:13.245 sys 0m0.072s
00:08:13.245 23:57:03 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:13.245 23:57:03 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:08:13.245 ************************************
00:08:13.245 END TEST nvme_overhead
00:08:13.245 ************************************
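The submit and complete overheads are reported in nanoseconds per I/O, while the two histograms bucket the same samples in microseconds. Judging from the @1125 invocation above, -o is the I/O size in bytes, -t the run time in seconds, -H requests the histograms and -i the shared-memory id; those flag meanings are inferred from that command line rather than documented here. A sketch of a longer manual run, which mainly tightens the noisy max column (for example the 325586.9 ns submit outlier from this 1-second run):

    $ sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 10 -H -i 0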
************************************ 00:08:13.245 END TEST nvme_overhead 00:08:13.245 ************************************ 00:08:13.245 23:57:03 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:13.245 23:57:03 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:13.245 23:57:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.245 23:57:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.245 ************************************ 00:08:13.245 START TEST nvme_arbitration 00:08:13.245 ************************************ 00:08:13.245 23:57:03 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:16.526 Initializing NVMe Controllers 00:08:16.526 Attached to 0000:00:13.0 00:08:16.526 Attached to 0000:00:10.0 00:08:16.526 Attached to 0000:00:11.0 00:08:16.526 Attached to 0000:00:12.0 00:08:16.526 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:16.526 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:16.526 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:16.526 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:16.526 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:16.526 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:16.526 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:16.526 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:16.526 Initialization complete. Launching workers. 00:08:16.526 Starting thread on core 1 with urgent priority queue 00:08:16.526 Starting thread on core 2 with urgent priority queue 00:08:16.526 Starting thread on core 3 with urgent priority queue 00:08:16.526 Starting thread on core 0 with urgent priority queue 00:08:16.526 QEMU NVMe Ctrl (12343 ) core 0: 6016.00 IO/s 16.62 secs/100000 ios 00:08:16.526 QEMU NVMe Ctrl (12342 ) core 0: 6016.00 IO/s 16.62 secs/100000 ios 00:08:16.526 QEMU NVMe Ctrl (12340 ) core 1: 6016.00 IO/s 16.62 secs/100000 ios 00:08:16.526 QEMU NVMe Ctrl (12342 ) core 1: 6016.00 IO/s 16.62 secs/100000 ios 00:08:16.526 QEMU NVMe Ctrl (12341 ) core 2: 5845.33 IO/s 17.11 secs/100000 ios 00:08:16.526 QEMU NVMe Ctrl (12342 ) core 3: 5653.33 IO/s 17.69 secs/100000 ios 00:08:16.526 ======================================================== 00:08:16.526 00:08:16.526 00:08:16.526 real 0m3.201s 00:08:16.526 user 0m9.031s 00:08:16.526 sys 0m0.079s 00:08:16.526 23:57:06 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.526 23:57:06 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:16.526 ************************************ 00:08:16.526 END TEST nvme_arbitration 00:08:16.526 ************************************ 00:08:16.526 23:57:06 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:16.526 23:57:06 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:16.526 23:57:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.527 23:57:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.527 ************************************ 00:08:16.527 START TEST nvme_single_aen 00:08:16.527 ************************************ 00:08:16.527 23:57:06 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:16.527 
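Annotation: the arbitration results above (per-core IO/s under urgent-priority weighted round robin) can be reproduced by hand; the long "-q 64 -s 131072 ..." line in the log is the example echoing its effective configuration, defaults included, so only the short invocation is needed.

    # -t 3: run for 3 seconds; -i 0: shm group id (flag readings are
    # assumptions based on the example's conventions, check its -h output)
    sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0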
Asynchronous Event Request test 00:08:16.527 Attached to 0000:00:13.0 00:08:16.527 Attached to 0000:00:10.0 00:08:16.527 Attached to 0000:00:11.0 00:08:16.527 Attached to 0000:00:12.0 00:08:16.527 Reset controller to setup AER completions for this process 00:08:16.527 Registering asynchronous event callbacks... 00:08:16.527 Getting orig temperature thresholds of all controllers 00:08:16.527 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:16.527 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:16.527 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:16.527 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:16.527 Setting all controllers temperature threshold low to trigger AER 00:08:16.527 Waiting for all controllers temperature threshold to be set lower 00:08:16.527 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:16.527 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:16.527 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:16.527 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:16.527 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:16.527 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:16.527 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:16.527 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:16.527 Waiting for all controllers to trigger AER and reset threshold 00:08:16.527 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:16.527 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:16.527 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:16.527 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:16.527 Cleaning up... 
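Annotation: the AER test above works by dropping the composite temperature threshold (feature 0x04) below the drive's current 323 Kelvin reading so the controller fires an AER pointing at log page 2, then restoring the original 343 Kelvin threshold. A minimal nvme-cli sketch of the same trick, assuming a /dev/nvme0 controller; the device name and the lowered value are illustrative.

    nvme get-feature /dev/nvme0 -f 0x04            # read the original threshold
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x0140  # ~320 K, below the 323 K current temp
    nvme smart-log /dev/nvme0                      # log page 2, which the AER points at
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x0157  # restore 343 K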
00:08:16.527 00:08:16.527 real 0m0.188s 00:08:16.527 user 0m0.059s 00:08:16.527 sys 0m0.084s 00:08:16.527 ************************************ 00:08:16.527 END TEST nvme_single_aen 00:08:16.527 ************************************ 00:08:16.527 23:57:06 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.527 23:57:06 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:16.527 23:57:06 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:16.527 23:57:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.527 23:57:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.527 23:57:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.527 ************************************ 00:08:16.527 START TEST nvme_doorbell_aers 00:08:16.527 ************************************ 00:08:16.527 23:57:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:16.527 23:57:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:16.527 23:57:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:16.527 23:57:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:16.527 23:57:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:16.527 23:57:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:16.527 23:57:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:16.527 23:57:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:16.527 23:57:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:16.527 23:57:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:16.785 23:57:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:16.785 23:57:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:16.785 23:57:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:16.785 23:57:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:16.785 [2024-11-20 23:57:07.193746] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:26.772 Executing: test_write_invalid_db 00:08:26.772 Waiting for AER completion... 00:08:26.772 Failure: test_write_invalid_db 00:08:26.772 00:08:26.772 Executing: test_invalid_db_write_overflow_sq 00:08:26.772 Waiting for AER completion... 00:08:26.772 Failure: test_invalid_db_write_overflow_sq 00:08:26.772 00:08:26.772 Executing: test_invalid_db_write_overflow_cq 00:08:26.772 Waiting for AER completion... 
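Annotation: the bdfs array the doorbell test iterates over is built exactly as traced above: gen_nvme.sh emits a bdev JSON config for every local controller and jq pulls out each PCI address. Condensed:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # the test bails if nothing was found
    printf '%s\n' "${bdfs[@]}"        # 0000:00:10.0 0000:00:11.0 ...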
00:08:26.772 Failure: test_invalid_db_write_overflow_cq 00:08:26.772 00:08:26.772 23:57:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:26.772 23:57:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:26.772 [2024-11-20 23:57:17.184304] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:36.746 Executing: test_write_invalid_db 00:08:36.746 Waiting for AER completion... 00:08:36.746 Failure: test_write_invalid_db 00:08:36.746 00:08:36.746 Executing: test_invalid_db_write_overflow_sq 00:08:36.746 Waiting for AER completion... 00:08:36.746 Failure: test_invalid_db_write_overflow_sq 00:08:36.746 00:08:36.746 Executing: test_invalid_db_write_overflow_cq 00:08:36.746 Waiting for AER completion... 00:08:36.746 Failure: test_invalid_db_write_overflow_cq 00:08:36.746 00:08:36.746 23:57:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:36.746 23:57:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:37.004 [2024-11-20 23:57:27.211719] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:47.062 Executing: test_write_invalid_db 00:08:47.062 Waiting for AER completion... 00:08:47.062 Failure: test_write_invalid_db 00:08:47.062 00:08:47.062 Executing: test_invalid_db_write_overflow_sq 00:08:47.062 Waiting for AER completion... 00:08:47.062 Failure: test_invalid_db_write_overflow_sq 00:08:47.062 00:08:47.062 Executing: test_invalid_db_write_overflow_cq 00:08:47.062 Waiting for AER completion... 00:08:47.062 Failure: test_invalid_db_write_overflow_cq 00:08:47.062 00:08:47.062 23:57:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:47.062 23:57:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:47.062 [2024-11-20 23:57:37.247577] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 Executing: test_write_invalid_db 00:08:57.065 Waiting for AER completion... 00:08:57.065 Failure: test_write_invalid_db 00:08:57.065 00:08:57.065 Executing: test_invalid_db_write_overflow_sq 00:08:57.065 Waiting for AER completion... 00:08:57.065 Failure: test_invalid_db_write_overflow_sq 00:08:57.065 00:08:57.065 Executing: test_invalid_db_write_overflow_cq 00:08:57.065 Waiting for AER completion... 
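Annotation: each doorbell_aers invocation above follows the same pattern: a 10-second cap per device, with --preserve-status so a real test failure is reported instead of timeout's own exit code 124. The loop, condensed from the trace and reusing the bdfs array from the enumeration sketch above:

    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done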
00:08:57.065 Failure: test_invalid_db_write_overflow_cq 00:08:57.065 00:08:57.065 00:08:57.065 real 0m40.169s 00:08:57.065 user 0m34.141s 00:08:57.065 sys 0m5.665s 00:08:57.065 23:57:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.065 23:57:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:57.065 ************************************ 00:08:57.065 END TEST nvme_doorbell_aers 00:08:57.065 ************************************ 00:08:57.065 23:57:47 nvme -- nvme/nvme.sh@97 -- # uname 00:08:57.065 23:57:47 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:57.065 23:57:47 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:57.065 23:57:47 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:57.065 23:57:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.065 23:57:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.065 ************************************ 00:08:57.065 START TEST nvme_multi_aen 00:08:57.065 ************************************ 00:08:57.065 23:57:47 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:57.065 [2024-11-20 23:57:47.280289] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 [2024-11-20 23:57:47.280478] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 [2024-11-20 23:57:47.280546] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 [2024-11-20 23:57:47.281662] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 [2024-11-20 23:57:47.281758] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 [2024-11-20 23:57:47.281805] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 [2024-11-20 23:57:47.282733] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 [2024-11-20 23:57:47.282821] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 [2024-11-20 23:57:47.282873] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 [2024-11-20 23:57:47.283759] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 [2024-11-20 23:57:47.283845] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 00:08:57.065 [2024-11-20 23:57:47.283894] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75046) is not found. Dropping the request. 
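Annotation: nvme_multi_aen reuses the same aer binary in multi-process mode; the [Child] lines that follow come from a forked child attaching to the same controllers. The "owning process ... is not found. Dropping the request" messages above are pending admin requests whose owner has already exited being flushed, not new failures. Flag readings in the sketch are assumptions: -m appears to fork the child, -T runs the temperature-threshold variant, -i 0 joins DPDK shm group 0 so both processes share hugepages.

    aer=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
    "$aer" -m -T -i 0   # parent prints the plain lines, the child the [Child] ones
    echo "exit: $?"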
00:08:57.065 Child process pid: 75572 00:08:57.065 [Child] Asynchronous Event Request test 00:08:57.065 [Child] Attached to 0000:00:13.0 00:08:57.065 [Child] Attached to 0000:00:10.0 00:08:57.065 [Child] Attached to 0000:00:11.0 00:08:57.065 [Child] Attached to 0000:00:12.0 00:08:57.065 [Child] Registering asynchronous event callbacks... 00:08:57.065 [Child] Getting orig temperature thresholds of all controllers 00:08:57.065 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:57.065 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:57.065 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:57.065 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:57.065 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:57.065 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:57.065 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:57.065 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:57.065 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:57.065 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:57.065 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:57.065 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:57.065 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:57.065 [Child] Cleaning up... 00:08:57.323 Asynchronous Event Request test 00:08:57.323 Attached to 0000:00:13.0 00:08:57.323 Attached to 0000:00:10.0 00:08:57.323 Attached to 0000:00:11.0 00:08:57.323 Attached to 0000:00:12.0 00:08:57.323 Reset controller to setup AER completions for this process 00:08:57.323 Registering asynchronous event callbacks... 
00:08:57.323 Getting orig temperature thresholds of all controllers 00:08:57.323 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:57.323 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:57.323 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:57.323 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:57.323 Setting all controllers temperature threshold low to trigger AER 00:08:57.323 Waiting for all controllers temperature threshold to be set lower 00:08:57.323 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:57.323 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:57.323 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:57.323 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:57.323 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:57.323 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:57.323 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:57.323 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:57.323 Waiting for all controllers to trigger AER and reset threshold 00:08:57.323 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:57.323 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:57.323 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:57.323 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:57.323 Cleaning up... 00:08:57.323 00:08:57.323 real 0m0.365s 00:08:57.323 user 0m0.101s 00:08:57.323 sys 0m0.154s 00:08:57.323 23:57:47 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.323 ************************************ 00:08:57.323 END TEST nvme_multi_aen 00:08:57.323 ************************************ 00:08:57.323 23:57:47 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:57.323 23:57:47 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:57.323 23:57:47 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:57.323 23:57:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.323 23:57:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.323 ************************************ 00:08:57.323 START TEST nvme_startup 00:08:57.323 ************************************ 00:08:57.323 23:57:47 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:57.323 Initializing NVMe Controllers 00:08:57.323 Attached to 0000:00:13.0 00:08:57.323 Attached to 0000:00:10.0 00:08:57.323 Attached to 0000:00:11.0 00:08:57.323 Attached to 0000:00:12.0 00:08:57.323 Initialization complete. 00:08:57.323 Time used:105784.773 (us). 
00:08:57.323 00:08:57.323 real 0m0.152s 00:08:57.323 user 0m0.039s 00:08:57.323 sys 0m0.074s 00:08:57.323 23:57:47 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.323 23:57:47 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:57.323 ************************************ 00:08:57.323 END TEST nvme_startup 00:08:57.323 ************************************ 00:08:57.323 23:57:47 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:57.323 23:57:47 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.323 23:57:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.323 23:57:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.323 ************************************ 00:08:57.323 START TEST nvme_multi_secondary 00:08:57.323 ************************************ 00:08:57.323 23:57:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:57.323 23:57:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75617 00:08:57.323 23:57:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75618 00:08:57.323 23:57:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:57.323 23:57:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:57.323 23:57:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:00.705 Initializing NVMe Controllers 00:09:00.705 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:00.705 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:00.705 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:00.705 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:00.705 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:00.705 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:00.705 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:00.705 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:00.705 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:00.705 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:00.705 Initialization complete. Launching workers. 
00:09:00.705 ======================================================== 00:09:00.705 Latency(us) 00:09:00.705 Device Information : IOPS MiB/s Average min max 00:09:00.705 PCIE (0000:00:13.0) NSID 1 from core 1: 6994.75 27.32 2286.94 833.93 6317.11 00:09:00.705 PCIE (0000:00:10.0) NSID 1 from core 1: 6994.75 27.32 2286.03 829.84 6563.70 00:09:00.705 PCIE (0000:00:11.0) NSID 1 from core 1: 6994.75 27.32 2287.00 834.42 6780.03 00:09:00.705 PCIE (0000:00:12.0) NSID 1 from core 1: 6994.75 27.32 2287.21 832.00 6176.37 00:09:00.705 PCIE (0000:00:12.0) NSID 2 from core 1: 6994.75 27.32 2287.24 833.77 5976.65 00:09:00.705 PCIE (0000:00:12.0) NSID 3 from core 1: 6994.75 27.32 2287.21 831.40 6110.25 00:09:00.705 ======================================================== 00:09:00.705 Total : 41968.50 163.94 2286.94 829.84 6780.03 00:09:00.705 00:09:00.705 Initializing NVMe Controllers 00:09:00.706 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:00.706 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:00.706 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:00.706 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:00.706 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:00.706 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:00.706 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:00.706 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:00.706 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:00.706 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:00.706 Initialization complete. Launching workers. 00:09:00.706 ======================================================== 00:09:00.706 Latency(us) 00:09:00.706 Device Information : IOPS MiB/s Average min max 00:09:00.706 PCIE (0000:00:13.0) NSID 1 from core 2: 2877.00 11.24 5560.86 1578.63 14106.39 00:09:00.706 PCIE (0000:00:10.0) NSID 1 from core 2: 2877.00 11.24 5560.23 1567.70 14201.48 00:09:00.706 PCIE (0000:00:11.0) NSID 1 from core 2: 2877.00 11.24 5560.77 1648.06 14056.19 00:09:00.706 PCIE (0000:00:12.0) NSID 1 from core 2: 2877.00 11.24 5561.10 1538.34 14492.04 00:09:00.706 PCIE (0000:00:12.0) NSID 2 from core 2: 2877.00 11.24 5562.15 1518.83 15010.64 00:09:00.706 PCIE (0000:00:12.0) NSID 3 from core 2: 2877.00 11.24 5570.02 1589.37 14217.37 00:09:00.706 ======================================================== 00:09:00.706 Total : 17262.01 67.43 5562.52 1518.83 15010.64 00:09:00.706 00:09:00.964 23:57:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75617 00:09:02.877 Initializing NVMe Controllers 00:09:02.877 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:02.877 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:02.877 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:02.877 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:02.877 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:02.877 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:02.877 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:02.877 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:02.877 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:02.877 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:02.877 Initialization complete. Launching workers. 
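Annotation: the three spdk_nvme_perf processes above exercise SPDK's primary/secondary model: the -c 0x1 instance (5 s) appears to act as the long-lived primary while the -c 0x2 and -c 0x4 instances (3 s each) attach as secondaries through the shared -i 0 memory group. Condensed, with backgrounding standing in for the harness's pid bookkeeping:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &  # primary, core 0
    sleep 1                                           # let the primary init first (assumption)
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &  # secondary, core 1
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &  # secondary, core 2
    wait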
00:09:02.877 ======================================================== 00:09:02.877 Latency(us) 00:09:02.877 Device Information : IOPS MiB/s Average min max 00:09:02.877 PCIE (0000:00:13.0) NSID 1 from core 0: 9707.25 37.92 1647.86 717.73 7946.23 00:09:02.877 PCIE (0000:00:10.0) NSID 1 from core 0: 9707.25 37.92 1646.91 708.53 8092.33 00:09:02.877 PCIE (0000:00:11.0) NSID 1 from core 0: 9707.25 37.92 1647.82 714.95 8068.92 00:09:02.877 PCIE (0000:00:12.0) NSID 1 from core 0: 9707.25 37.92 1647.80 547.51 7859.28 00:09:02.877 PCIE (0000:00:12.0) NSID 2 from core 0: 9707.25 37.92 1647.78 469.42 7868.27 00:09:02.877 PCIE (0000:00:12.0) NSID 3 from core 0: 9707.25 37.92 1647.78 387.11 7869.76 00:09:02.877 ======================================================== 00:09:02.877 Total : 58243.53 227.51 1647.66 387.11 8092.33 00:09:02.877 00:09:02.877 23:57:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75618 00:09:02.877 23:57:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75687 00:09:02.877 23:57:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:02.877 23:57:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75688 00:09:02.878 23:57:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:02.878 23:57:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:06.152 Initializing NVMe Controllers 00:09:06.152 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:06.152 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:06.152 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:06.152 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:06.152 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:06.152 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:06.152 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:06.152 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:06.152 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:06.152 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:06.152 Initialization complete. Launching workers. 
00:09:06.152 ======================================================== 00:09:06.152 Latency(us) 00:09:06.152 Device Information : IOPS MiB/s Average min max 00:09:06.152 PCIE (0000:00:13.0) NSID 1 from core 1: 6754.43 26.38 2368.36 864.18 7638.85 00:09:06.152 PCIE (0000:00:10.0) NSID 1 from core 1: 6754.43 26.38 2367.50 831.63 6819.71 00:09:06.152 PCIE (0000:00:11.0) NSID 1 from core 1: 6754.43 26.38 2368.65 850.86 6654.00 00:09:06.152 PCIE (0000:00:12.0) NSID 1 from core 1: 6754.43 26.38 2368.78 836.26 7213.36 00:09:06.152 PCIE (0000:00:12.0) NSID 2 from core 1: 6754.43 26.38 2368.94 856.98 7384.23 00:09:06.152 PCIE (0000:00:12.0) NSID 3 from core 1: 6754.43 26.38 2368.88 871.01 7136.23 00:09:06.152 ======================================================== 00:09:06.152 Total : 40526.56 158.31 2368.52 831.63 7638.85 00:09:06.152 00:09:06.152 Initializing NVMe Controllers 00:09:06.152 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:06.152 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:06.152 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:06.152 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:06.152 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:06.152 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:06.152 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:06.152 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:06.152 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:06.152 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:06.152 Initialization complete. Launching workers. 00:09:06.152 ======================================================== 00:09:06.152 Latency(us) 00:09:06.152 Device Information : IOPS MiB/s Average min max 00:09:06.152 PCIE (0000:00:13.0) NSID 1 from core 0: 6668.78 26.05 2398.76 923.09 7934.21 00:09:06.152 PCIE (0000:00:10.0) NSID 1 from core 0: 6668.78 26.05 2397.84 900.88 8406.77 00:09:06.152 PCIE (0000:00:11.0) NSID 1 from core 0: 6668.78 26.05 2398.74 941.37 8599.78 00:09:06.152 PCIE (0000:00:12.0) NSID 1 from core 0: 6668.78 26.05 2398.84 923.71 8548.67 00:09:06.152 PCIE (0000:00:12.0) NSID 2 from core 0: 6668.78 26.05 2398.98 899.88 7921.57 00:09:06.152 PCIE (0000:00:12.0) NSID 3 from core 0: 6668.78 26.05 2399.05 907.45 8177.36 00:09:06.152 ======================================================== 00:09:06.152 Total : 40012.67 156.30 2398.70 899.88 8599.78 00:09:06.152 00:09:08.050 Initializing NVMe Controllers 00:09:08.050 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:08.050 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:08.050 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:08.050 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:08.050 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:08.050 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:08.050 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:08.050 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:08.050 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:08.050 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:08.050 Initialization complete. Launching workers. 
00:09:08.050 ======================================================== 00:09:08.050 Latency(us) 00:09:08.050 Device Information : IOPS MiB/s Average min max 00:09:08.050 PCIE (0000:00:13.0) NSID 1 from core 2: 4227.07 16.51 3784.73 825.82 16096.24 00:09:08.050 PCIE (0000:00:10.0) NSID 1 from core 2: 4227.07 16.51 3782.80 807.80 14653.35 00:09:08.050 PCIE (0000:00:11.0) NSID 1 from core 2: 4227.07 16.51 3784.43 730.21 17613.71 00:09:08.050 PCIE (0000:00:12.0) NSID 1 from core 2: 4227.07 16.51 3784.53 703.08 18481.40 00:09:08.050 PCIE (0000:00:12.0) NSID 2 from core 2: 4227.07 16.51 3783.86 606.33 18829.52 00:09:08.050 PCIE (0000:00:12.0) NSID 3 from core 2: 4227.07 16.51 3781.09 493.61 16474.10 00:09:08.050 ======================================================== 00:09:08.050 Total : 25362.41 99.07 3783.58 493.61 18829.52 00:09:08.050 00:09:08.050 ************************************ 00:09:08.050 END TEST nvme_multi_secondary 00:09:08.050 ************************************ 00:09:08.050 23:57:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75687 00:09:08.050 23:57:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75688 00:09:08.050 00:09:08.050 real 0m10.544s 00:09:08.050 user 0m18.263s 00:09:08.050 sys 0m0.481s 00:09:08.050 23:57:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.050 23:57:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:08.050 23:57:58 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:08.050 23:57:58 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:08.050 23:57:58 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/74660 ]] 00:09:08.050 23:57:58 nvme -- common/autotest_common.sh@1090 -- # kill 74660 00:09:08.050 23:57:58 nvme -- common/autotest_common.sh@1091 -- # wait 74660 00:09:08.050 [2024-11-20 23:57:58.307563] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.307699] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.307741] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.307806] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.308626] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.308714] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.308749] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.308783] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.309527] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 
00:09:08.050 [2024-11-20 23:57:58.309788] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.309826] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.309863] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.311199] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.311419] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.311506] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 [2024-11-20 23:57:58.311588] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:09:08.050 23:57:58 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:08.050 23:57:58 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:08.050 23:57:58 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:08.050 23:57:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.050 23:57:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.050 23:57:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:08.050 ************************************ 00:09:08.050 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:08.050 ************************************ 00:09:08.050 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:08.050 * Looking for test storage... 
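Annotation: the kill_stub teardown above stops the long-lived stub process that held the controllers during the nvme suite; the burst of "owning process (pid 75571) is not found. Dropping the request" messages is that exited owner's queued admin commands being flushed, not a failure. The sequence, condensed from the trace:

    if [[ -e /proc/74660 ]]; then   # stub still running?
        kill 74660
        wait 74660 || true          # reap it; ignore the nonzero exit
    fi
    rm -f /var/run/spdk_stub0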
00:09:08.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:08.050 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:08.050 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:08.050 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:08.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.309 --rc genhtml_branch_coverage=1 00:09:08.309 --rc genhtml_function_coverage=1 00:09:08.309 --rc genhtml_legend=1 00:09:08.309 --rc geninfo_all_blocks=1 00:09:08.309 --rc geninfo_unexecuted_blocks=1 00:09:08.309 00:09:08.309 ' 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:08.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.309 --rc genhtml_branch_coverage=1 00:09:08.309 --rc genhtml_function_coverage=1 00:09:08.309 --rc genhtml_legend=1 00:09:08.309 --rc geninfo_all_blocks=1 00:09:08.309 --rc geninfo_unexecuted_blocks=1 00:09:08.309 00:09:08.309 ' 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:08.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.309 --rc genhtml_branch_coverage=1 00:09:08.309 --rc genhtml_function_coverage=1 00:09:08.309 --rc genhtml_legend=1 00:09:08.309 --rc geninfo_all_blocks=1 00:09:08.309 --rc geninfo_unexecuted_blocks=1 00:09:08.309 00:09:08.309 ' 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:08.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.309 --rc genhtml_branch_coverage=1 00:09:08.309 --rc genhtml_function_coverage=1 00:09:08.309 --rc genhtml_legend=1 00:09:08.309 --rc geninfo_all_blocks=1 00:09:08.309 --rc geninfo_unexecuted_blocks=1 00:09:08.309 00:09:08.309 ' 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:08.309 
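Annotation: the scripts/common.sh trace above is lcov version gating: "lt 1.15 2" splits both versions on "." and "-" and compares them field by field, padding the shorter one with zeros. A standalone sketch of the same pattern (my reconstruction, not the verbatim helper, which also handles pre/rc suffixes):

    cmp_versions() {   # cmp_versions <ver1> <op> <ver2>, op is '<', '>' or '='
        local IFS=.-
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]
    }
    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"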
23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:08.309 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:08.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75849 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75849 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 75849 ']' 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:08.310 23:57:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:08.310 [2024-11-20 23:57:58.647607] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:08.310 [2024-11-20 23:57:58.647723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75849 ] 00:09:08.568 [2024-11-20 23:57:58.786253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.568 [2024-11-20 23:57:58.819555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.568 [2024-11-20 23:57:58.819827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.568 [2024-11-20 23:57:58.820014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.568 [2024-11-20 23:57:58.820102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.133 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.133 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:09.133 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:09.133 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.133 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:09.133 nvme0n1 00:09:09.133 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.133 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:09.133 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_8tOMe.txt 00:09:09.133 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:09.133 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.133 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:09.391 true 00:09:09.391 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.391 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:09.391 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732147079 00:09:09.391 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75872 00:09:09.391 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:09.391 23:57:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:09.391 23:57:59 
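Annotation: setup for the stuck-command test, condensed from the rpc_cmd traces above (rpc_cmd wraps scripts/rpc.py against the freshly started spdk_tgt): attach the controller as nvme0, then arm a one-shot injection that holds the next admin Get Features (opcode 10) for up to 15 s and completes it with SCT 0 / SC 1.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit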
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:11.300 [2024-11-20 23:58:01.566951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:11.300 [2024-11-20 23:58:01.567155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:11.300 [2024-11-20 23:58:01.567172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:11.300 [2024-11-20 23:58:01.567193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.300 [2024-11-20 23:58:01.568881] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:11.300 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75872 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75872 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75872 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_8tOMe.txt 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
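Annotation: the test then issues the doomed command in the background and resets the controller while it is held; the reset aborts the pending request and the injected status comes back through the saved completion ("GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007" in the trace). The base64 blob is the raw admin command copied from the log; the output redirect is my reconstruction of how the harness captures the JSON reply whose .cpl field is checked next.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tmp_file=$(mktemp /tmp/err_inj_XXXXX.txt)
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
        > "$tmp_file" &
    get_feat_pid=$!
    sleep 2                                   # let the injected hold take effect
    "$rpc" bdev_nvme_reset_controller nvme0   # the reset aborts the held admin command
    wait "$get_feat_pid"                      # returns once the injected completion lands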
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_8tOMe.txt 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75849 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 75849 ']' 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 75849 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75849 00:09:11.300 killing process with pid 75849 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75849' 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 75849 00:09:11.300 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 75849 00:09:11.561 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:11.561 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:11.561 ************************************ 00:09:11.561 END 
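Annotation: the completion pulled out of the temp file with "jq -r .cpl" is then verified bit by bit: the cpl field is the base64 16-byte completion entry, bytes 14-15 form the status word, bit 0 of that word is the phase tag, SC sits at bits 1-8 and SCT at bits 9-11. A reconstruction of the decode traced above; the byte/shift math is my reading of the trace, so treat it as an assumption.

    decode_status_bits() {  # <base64_cpl> <shift> <mask>
        local bytes
        bytes=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        local status=$(( bytes[14] | bytes[15] << 8 ))   # bytes 14-15: status word
        printf '0x%x\n' $(( (status >> $2) & $3 ))
    }
    decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # SC  -> 0x1, the injected code
    decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # SCT -> 0x0, generic command status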
TEST bdev_nvme_reset_stuck_adm_cmd 00:09:11.561 ************************************ 00:09:11.561 00:09:11.561 real 0m3.529s 00:09:11.561 user 0m12.746s 00:09:11.561 sys 0m0.435s 00:09:11.561 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.561 23:58:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:11.561 23:58:01 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:11.561 23:58:01 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:11.561 23:58:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.561 23:58:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.561 23:58:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:11.561 ************************************ 00:09:11.561 START TEST nvme_fio 00:09:11.561 ************************************ 00:09:11.561 23:58:01 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:11.561 23:58:01 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:11.561 23:58:01 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:11.561 23:58:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:11.561 23:58:01 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:11.561 23:58:01 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:11.561 23:58:01 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:11.561 23:58:01 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:11.561 23:58:01 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:11.822 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:11.822 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:11.822 23:58:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:11.822 23:58:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:11.822 23:58:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:11.822 23:58:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:11.822 23:58:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:11.822 23:58:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:11.822 23:58:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:12.084 23:58:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:12.084 23:58:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:12.084 
23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:12.084 23:58:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:12.344 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:12.344 fio-3.35 00:09:12.344 Starting 1 thread 00:09:17.656 00:09:17.656 test: (groupid=0, jobs=1): err= 0: pid=75995: Wed Nov 20 23:58:07 2024 00:09:17.656 read: IOPS=23.8k, BW=93.0MiB/s (97.5MB/s)(186MiB/2001msec) 00:09:17.656 slat (nsec): min=3380, max=46568, avg=4862.31, stdev=1765.14 00:09:17.656 clat (usec): min=233, max=7803, avg=2685.49, stdev=793.31 00:09:17.656 lat (usec): min=238, max=7815, avg=2690.35, stdev=794.32 00:09:17.656 clat percentiles (usec): 00:09:17.656 | 1.00th=[ 1844], 5.00th=[ 2073], 10.00th=[ 2147], 20.00th=[ 2245], 00:09:17.656 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:09:17.656 | 70.00th=[ 2638], 80.00th=[ 2835], 90.00th=[ 3425], 95.00th=[ 4359], 00:09:17.656 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[ 7308], 99.95th=[ 7504], 00:09:17.656 | 99.99th=[ 7767] 00:09:17.656 bw ( KiB/s): min=91160, max=93040, per=96.92%, avg=92264.00, stdev=981.98, samples=3 00:09:17.656 iops : min=22790, max=23260, avg=23066.00, stdev=245.50, samples=3 00:09:17.656 write: IOPS=23.6k, BW=92.4MiB/s (96.8MB/s)(185MiB/2001msec); 0 zone resets 00:09:17.656 slat (nsec): min=3528, max=67967, avg=5144.30, stdev=1897.64 00:09:17.656 clat (usec): min=202, max=7923, avg=2694.97, stdev=793.71 00:09:17.656 lat (usec): min=207, max=7936, avg=2700.12, stdev=794.74 00:09:17.656 clat percentiles (usec): 00:09:17.656 | 1.00th=[ 1909], 5.00th=[ 2089], 10.00th=[ 2180], 20.00th=[ 2278], 00:09:17.656 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2573], 00:09:17.656 | 70.00th=[ 2671], 80.00th=[ 2835], 90.00th=[ 3425], 95.00th=[ 4359], 00:09:17.656 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7439], 99.95th=[ 7635], 00:09:17.656 | 99.99th=[ 7767] 00:09:17.656 bw ( KiB/s): min=90000, max=94712, per=97.66%, avg=92362.67, stdev=2356.03, samples=3 00:09:17.656 iops : min=22500, max=23678, avg=23090.67, stdev=589.01, samples=3 00:09:17.656 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 
1000=0.01% 00:09:17.656 lat (msec) : 2=2.18%, 4=91.33%, 10=6.44% 00:09:17.656 cpu : usr=99.15%, sys=0.15%, ctx=7, majf=0, minf=625 00:09:17.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:17.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:17.656 issued rwts: total=47621,47311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:17.656 00:09:17.656 Run status group 0 (all jobs): 00:09:17.656 READ: bw=93.0MiB/s (97.5MB/s), 93.0MiB/s-93.0MiB/s (97.5MB/s-97.5MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:17.656 WRITE: bw=92.4MiB/s (96.8MB/s), 92.4MiB/s-92.4MiB/s (96.8MB/s-96.8MB/s), io=185MiB (194MB), run=2001-2001msec 00:09:17.656 ----------------------------------------------------- 00:09:17.656 Suppressions used: 00:09:17.656 count bytes template 00:09:17.656 1 32 /usr/src/fio/parse.c 00:09:17.656 1 8 libtcmalloc_minimal.so 00:09:17.656 ----------------------------------------------------- 00:09:17.656 00:09:17.656 23:58:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:17.656 23:58:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:17.656 23:58:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:17.656 23:58:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:17.656 23:58:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:17.656 23:58:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:17.916 23:58:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:17.916 23:58:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:17.916 
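Every per-device fio run in this test goes through the same wrapper visible in the trace: ldd the SPDK fio plugin, grep the sanitizer runtime out of the third column, then start fio with both libasan and the plugin in LD_PRELOAD so ASan initializes before the ioengine loads. A condensed sketch of that pattern (plugin and config paths taken from this log):

#!/usr/bin/env bash
# Sketch: preload the sanitizer runtime ahead of the SPDK fio plugin.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
fio_bin=/usr/src/fio/fio

asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # ldd's third column is the resolved path, e.g. /usr/lib64/libasan.so.8
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done

LD_PRELOAD="$asan_lib $plugin" "$fio_bin" \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096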
23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:17.916 23:58:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:18.177 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:18.177 fio-3.35 00:09:18.177 Starting 1 thread 00:09:24.760 00:09:24.760 test: (groupid=0, jobs=1): err= 0: pid=76051: Wed Nov 20 23:58:14 2024 00:09:24.760 read: IOPS=23.8k, BW=93.0MiB/s (97.5MB/s)(186MiB/2001msec) 00:09:24.760 slat (usec): min=3, max=108, avg= 4.98, stdev= 2.33 00:09:24.760 clat (usec): min=271, max=10954, avg=2684.67, stdev=885.88 00:09:24.760 lat (usec): min=275, max=10994, avg=2689.65, stdev=887.40 00:09:24.760 clat percentiles (usec): 00:09:24.760 | 1.00th=[ 1876], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2343], 00:09:24.760 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2409], 60.00th=[ 2474], 00:09:24.760 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 3228], 95.00th=[ 4948], 00:09:24.760 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 8291], 99.95th=[ 8848], 00:09:24.760 | 99.99th=[10814] 00:09:24.760 bw ( KiB/s): min=92128, max=98992, per=100.00%, avg=96640.00, stdev=3908.69, samples=3 00:09:24.760 iops : min=23032, max=24748, avg=24160.00, stdev=977.17, samples=3 00:09:24.760 write: IOPS=23.7k, BW=92.4MiB/s (96.9MB/s)(185MiB/2001msec); 0 zone resets 00:09:24.760 slat (nsec): min=3417, max=61561, avg=5256.10, stdev=2357.77 00:09:24.760 clat (usec): min=237, max=10855, avg=2689.13, stdev=894.76 00:09:24.760 lat (usec): min=242, max=10868, avg=2694.39, stdev=896.35 00:09:24.760 clat percentiles (usec): 00:09:24.760 | 1.00th=[ 1893], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2343], 00:09:24.760 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:24.760 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 3195], 95.00th=[ 5014], 00:09:24.760 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 8291], 99.95th=[ 9372], 00:09:24.760 | 99.99th=[10552] 00:09:24.760 bw ( KiB/s): min=91960, max=100288, per=100.00%, avg=96770.67, stdev=4312.01, samples=3 00:09:24.760 iops : min=22990, max=25072, avg=24192.67, stdev=1078.00, samples=3 00:09:24.760 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:24.760 lat (msec) : 2=1.45%, 4=91.49%, 10=6.99%, 20=0.03% 00:09:24.760 cpu : usr=99.10%, sys=0.20%, ctx=26, majf=0, minf=625 00:09:24.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:24.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.760 issued rwts: total=47648,47337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.760 00:09:24.760 Run status group 0 (all jobs): 00:09:24.760 READ: bw=93.0MiB/s (97.5MB/s), 93.0MiB/s-93.0MiB/s (97.5MB/s-97.5MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:24.761 WRITE: bw=92.4MiB/s (96.9MB/s), 92.4MiB/s-92.4MiB/s (96.9MB/s-96.9MB/s), io=185MiB (194MB), run=2001-2001msec 00:09:24.761 ----------------------------------------------------- 00:09:24.761 Suppressions used: 00:09:24.761 count bytes template 00:09:24.761 1 32 /usr/src/fio/parse.c 00:09:24.761 1 8 
libtcmalloc_minimal.so 00:09:24.761 ----------------------------------------------------- 00:09:24.761 00:09:24.761 23:58:15 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:24.761 23:58:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:24.761 23:58:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:24.761 23:58:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:25.020 23:58:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:25.020 23:58:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:25.020 23:58:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:25.020 23:58:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:25.020 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:25.020 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:25.020 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:25.020 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:25.020 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.020 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:25.020 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:25.020 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:25.020 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:25.020 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.020 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:25.279 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:25.279 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:25.279 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:25.279 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:25.279 23:58:15 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:25.279 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:25.279 fio-3.35 00:09:25.279 Starting 1 thread 00:09:33.395 00:09:33.395 test: (groupid=0, jobs=1): err= 0: pid=76106: Wed Nov 20 23:58:22 2024 00:09:33.395 read: IOPS=25.2k, BW=98.4MiB/s (103MB/s)(197MiB/2001msec) 00:09:33.395 slat (nsec): min=4236, max=57168, avg=4800.07, stdev=1733.15 00:09:33.395 clat (usec): min=300, max=8003, avg=2535.93, stdev=629.94 00:09:33.395 lat (usec): min=305, max=8060, avg=2540.73, stdev=631.00 00:09:33.395 clat percentiles (usec): 
00:09:33.395 | 1.00th=[ 1549], 5.00th=[ 2040], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:33.395 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2409], 00:09:33.395 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2835], 95.00th=[ 3687], 00:09:33.395 | 99.00th=[ 5604], 99.50th=[ 6194], 99.90th=[ 6456], 99.95th=[ 6587], 00:09:33.395 | 99.99th=[ 7570] 00:09:33.395 bw ( KiB/s): min=99120, max=101168, per=99.60%, avg=100365.33, stdev=1093.41, samples=3 00:09:33.395 iops : min=24780, max=25292, avg=25091.33, stdev=273.35, samples=3 00:09:33.395 write: IOPS=25.1k, BW=97.9MiB/s (103MB/s)(196MiB/2001msec); 0 zone resets 00:09:33.395 slat (nsec): min=4334, max=91300, avg=5052.70, stdev=1819.68 00:09:33.395 clat (usec): min=203, max=7834, avg=2540.56, stdev=638.48 00:09:33.395 lat (usec): min=208, max=7848, avg=2545.61, stdev=639.54 00:09:33.395 clat percentiles (usec): 00:09:33.395 | 1.00th=[ 1549], 5.00th=[ 2040], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:33.395 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:33.396 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 2868], 95.00th=[ 3720], 00:09:33.396 | 99.00th=[ 5604], 99.50th=[ 6194], 99.90th=[ 6456], 99.95th=[ 6652], 00:09:33.396 | 99.99th=[ 7439] 00:09:33.396 bw ( KiB/s): min=98912, max=101224, per=100.00%, avg=100392.00, stdev=1285.02, samples=3 00:09:33.396 iops : min=24728, max=25306, avg=25098.00, stdev=321.25, samples=3 00:09:33.396 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.09% 00:09:33.396 lat (msec) : 2=4.40%, 4=91.33%, 10=4.13% 00:09:33.396 cpu : usr=99.30%, sys=0.10%, ctx=5, majf=0, minf=626 00:09:33.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:33.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.396 issued rwts: total=50408,50154,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.396 00:09:33.396 Run status group 0 (all jobs): 00:09:33.396 READ: bw=98.4MiB/s (103MB/s), 98.4MiB/s-98.4MiB/s (103MB/s-103MB/s), io=197MiB (206MB), run=2001-2001msec 00:09:33.396 WRITE: bw=97.9MiB/s (103MB/s), 97.9MiB/s-97.9MiB/s (103MB/s-103MB/s), io=196MiB (205MB), run=2001-2001msec 00:09:33.396 ----------------------------------------------------- 00:09:33.396 Suppressions used: 00:09:33.396 count bytes template 00:09:33.396 1 32 /usr/src/fio/parse.c 00:09:33.396 1 8 libtcmalloc_minimal.so 00:09:33.396 ----------------------------------------------------- 00:09:33.396 00:09:33.396 23:58:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:33.396 23:58:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:33.396 23:58:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:33.396 23:58:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:33.396 23:58:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:33.396 23:58:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:33.396 23:58:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:33.396 23:58:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:33.396 23:58:23 nvme.nvme_fio -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:33.396 23:58:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:33.396 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:33.396 fio-3.35 00:09:33.396 Starting 1 thread 00:09:39.977 00:09:39.977 test: (groupid=0, jobs=1): err= 0: pid=76161: Wed Nov 20 23:58:29 2024 00:09:39.977 read: IOPS=25.1k, BW=97.9MiB/s (103MB/s)(196MiB/2001msec) 00:09:39.977 slat (nsec): min=4214, max=52266, avg=4818.41, stdev=1909.19 00:09:39.977 clat (usec): min=220, max=12099, avg=2548.23, stdev=744.26 00:09:39.977 lat (usec): min=225, max=12151, avg=2553.05, stdev=745.51 00:09:39.977 clat percentiles (usec): 00:09:39.977 | 1.00th=[ 1745], 5.00th=[ 2212], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:39.977 | 30.00th=[ 2343], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:09:39.977 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2704], 95.00th=[ 3916], 00:09:39.977 | 99.00th=[ 6390], 99.50th=[ 6587], 99.90th=[ 7177], 99.95th=[ 8848], 00:09:39.977 | 99.99th=[11731] 00:09:39.977 bw ( KiB/s): min=91304, max=105512, per=98.32%, avg=98541.33, stdev=7107.75, samples=3 00:09:39.977 iops : min=22826, max=26378, avg=24635.33, stdev=1776.94, samples=3 00:09:39.977 write: IOPS=24.9k, BW=97.3MiB/s (102MB/s)(195MiB/2001msec); 0 zone resets 00:09:39.977 slat (nsec): min=4266, max=81640, avg=5081.57, stdev=1960.51 00:09:39.977 clat (usec): min=203, max=11960, avg=2557.78, stdev=755.31 00:09:39.977 lat (usec): min=208, max=11973, avg=2562.86, stdev=756.60 00:09:39.977 clat percentiles (usec): 00:09:39.977 | 1.00th=[ 1795], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2311], 00:09:39.977 | 30.00th=[ 2343], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 
2409], 00:09:39.977 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2704], 95.00th=[ 4015], 00:09:39.977 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 7308], 99.95th=[ 9372], 00:09:39.977 | 99.99th=[11600] 00:09:39.977 bw ( KiB/s): min=90976, max=106520, per=98.95%, avg=98578.67, stdev=7777.53, samples=3 00:09:39.977 iops : min=22744, max=26630, avg=24644.67, stdev=1944.38, samples=3 00:09:39.977 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:39.977 lat (msec) : 2=1.99%, 4=93.04%, 10=4.89%, 20=0.04% 00:09:39.977 cpu : usr=99.40%, sys=0.00%, ctx=6, majf=0, minf=625 00:09:39.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:39.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.977 issued rwts: total=50138,49838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.977 00:09:39.977 Run status group 0 (all jobs): 00:09:39.977 READ: bw=97.9MiB/s (103MB/s), 97.9MiB/s-97.9MiB/s (103MB/s-103MB/s), io=196MiB (205MB), run=2001-2001msec 00:09:39.977 WRITE: bw=97.3MiB/s (102MB/s), 97.3MiB/s-97.3MiB/s (102MB/s-102MB/s), io=195MiB (204MB), run=2001-2001msec 00:09:39.977 ----------------------------------------------------- 00:09:39.977 Suppressions used: 00:09:39.977 count bytes template 00:09:39.977 1 32 /usr/src/fio/parse.c 00:09:39.977 1 8 libtcmalloc_minimal.so 00:09:39.977 ----------------------------------------------------- 00:09:39.977 00:09:39.977 23:58:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:39.977 23:58:29 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:39.977 00:09:39.977 real 0m27.532s 00:09:39.977 user 0m16.226s 00:09:39.977 sys 0m21.036s 00:09:39.977 23:58:29 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.977 23:58:29 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:39.977 ************************************ 00:09:39.977 END TEST nvme_fio 00:09:39.977 ************************************ 00:09:39.977 ************************************ 00:09:39.977 END TEST nvme 00:09:39.977 ************************************ 00:09:39.977 00:09:39.977 real 1m34.518s 00:09:39.977 user 3m30.238s 00:09:39.977 sys 0m30.885s 00:09:39.977 23:58:29 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.977 23:58:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:39.977 23:58:29 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:39.977 23:58:29 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:39.977 23:58:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.977 23:58:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.977 23:58:29 -- common/autotest_common.sh@10 -- # set +x 00:09:39.977 ************************************ 00:09:39.977 START TEST nvme_scc 00:09:39.977 ************************************ 00:09:39.977 23:58:29 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:39.977 * Looking for test storage... 
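Each test above is launched through run_test, which is what prints the START/END asterisk banners and the real/user/sys lines. A rough re-creation of that wrapper (illustrative only, not the exact autotest_common.sh implementation, which also handles xtrace toggling and argument checks):

#!/usr/bin/env bash
# Sketch of a run_test-style wrapper: banner, time the command, banner.
run_test() {
    local test_name=$1 rc
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"           # bash's time keyword emits the real/user/sys summary
    rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test demo_sleep sleep 1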
00:09:39.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:39.977 23:58:29 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:39.977 23:58:29 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:39.977 23:58:29 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:39.977 23:58:29 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.977 23:58:29 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:39.977 23:58:29 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.977 23:58:29 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.977 --rc genhtml_branch_coverage=1 00:09:39.977 --rc genhtml_function_coverage=1 00:09:39.977 --rc genhtml_legend=1 00:09:39.977 --rc geninfo_all_blocks=1 00:09:39.977 --rc geninfo_unexecuted_blocks=1 00:09:39.977 00:09:39.977 ' 00:09:39.977 23:58:29 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.977 --rc genhtml_branch_coverage=1 00:09:39.977 --rc genhtml_function_coverage=1 00:09:39.977 --rc genhtml_legend=1 00:09:39.977 --rc geninfo_all_blocks=1 00:09:39.977 --rc geninfo_unexecuted_blocks=1 00:09:39.977 00:09:39.977 ' 00:09:39.977 23:58:29 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.977 --rc genhtml_branch_coverage=1 00:09:39.977 --rc genhtml_function_coverage=1 00:09:39.977 --rc genhtml_legend=1 00:09:39.977 --rc geninfo_all_blocks=1 00:09:39.977 --rc geninfo_unexecuted_blocks=1 00:09:39.977 00:09:39.977 ' 00:09:39.977 23:58:29 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.977 --rc genhtml_branch_coverage=1 00:09:39.978 --rc genhtml_function_coverage=1 00:09:39.978 --rc genhtml_legend=1 00:09:39.978 --rc geninfo_all_blocks=1 00:09:39.978 --rc geninfo_unexecuted_blocks=1 00:09:39.978 00:09:39.978 ' 00:09:39.978 23:58:29 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.978 23:58:29 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.978 23:58:29 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.978 23:58:29 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.978 23:58:29 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.978 23:58:29 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.978 23:58:29 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.978 23:58:29 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.978 23:58:29 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:39.978 23:58:29 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
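The lt/cmp_versions trace a little further above (scripts/common.sh) compares dotted version strings field by field after splitting them on the characters .-: . A simplified sketch of the same logic; unlike the traced version it assumes purely numeric fields and skips the decimal() guard:

#!/usr/bin/env bash
# Sketch: compare dotted versions the way cmp_versions does above.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v f1 f2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # Walk the longer of the two arrays, padding missing fields with 0
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        f1=${ver1[v]:-0} f2=${ver2[v]:-0}
        ((f1 > f2)) && { [[ $op == ">" ]]; return; }
        ((f1 < f2)) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" ]]   # every field matched
}

lt() { cmp_versions "$1" "<" "$2"; }

lt 1.15 2 && echo "1.15 < 2"   # matches the lcov check in the trace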
00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:39.978 23:58:29 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:39.978 23:58:29 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.978 23:58:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:39.978 23:58:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:39.978 23:58:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:39.978 23:58:29 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:39.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.978 Waiting for block devices as requested 00:09:39.978 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.978 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.978 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.978 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.274 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:45.274 23:58:35 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:45.274 23:58:35 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:45.274 23:58:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:45.274 23:58:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.274 23:58:35 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
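scan_nvme_ctrls has just started walking /sys/class/nvme, and for each controller nvme_get parses `nvme id-ctrl` line by line into an associative array (the IFS=: / read -r reg val / eval sequence repeating below). A condensed sketch of that parse, without the eval indirection or fixed nvme-cli path used by functions.sh:

#!/usr/bin/env bash
# Sketch: stash `nvme id-ctrl` fields into an associative array, as nvme_get
# does below. Requires nvme-cli; /dev/nvme0 is the device seen in this log.
declare -A nvme0

while IFS=: read -r reg val; do
    # Skip headers and blank lines that lack a "reg : val" pair
    [[ -n $reg && -n $val ]] || continue
    reg=${reg//[[:space:]]/}   # strip the padding around the register name
    val=${val# }               # drop the single space after the colon
    nvme0[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)

echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"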
00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:45.274 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:45.275 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:45.276 23:58:35 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.276 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.276 23:58:35 nvme_scc -- 
[nvme0 id-ctrl trace condensed, functions.sh@21-23 parse loop: pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-]
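A quick reading of three packed fields above, per the NVMe Identify Controller layout (a worked side note, not test output): sqes and cqes each carry two log2 sizes, the low nibble the required queue entry size and the high nibble the maximum, and mdts gives the maximum data transfer size as a power-of-two multiple of the controller's minimum memory page size.

    # Decoding the packed identify fields captured above (bash arithmetic):
    sqes=0x66; cqes=0x44; mdts=7
    echo $(( 2 ** (sqes & 0xf) ))    # 64-byte submission queue entries (required size)
    echo $(( 2 ** (cqes >> 4) ))     # 16-byte completion queue entries (maximum size)
    echo $(( 2 ** mdts * 4096 ))     # 524288: max transfer, assuming a 4 KiB MPSMIN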
00:09:45.277 23:58:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:45.277 23:58:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:45.277 23:58:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:45.277 23:58:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:45.277 23:58:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:45.277 23:58:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:09:45.277 23:58:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:45.277 23:58:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:09:45.277 23:58:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
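The nvme_get helper traced here drives the whole dump: it runs an nvme-cli identify command, splits each output line on ':' with read, and evals every non-empty pair into a global associative array named after the device. A minimal, illustrative sketch of that pattern (a simplified reconstruction under assumptions, not the exact nvme/functions.sh source; trimming details differ):

    #!/usr/bin/env bash
    # Sketch of the IFS=: / read / eval loop visible at functions.sh@16-23.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # global assoc array, e.g. nvme0n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue           # skip banner and blank fields
            reg=${reg//[[:space:]]/}            # bare key, e.g. nsze
            eval "${ref}[\$reg]=\"\${val# }\""  # nvme0n1[nsze]=0x140000, ...
        done < <("$@")                          # runs the identify command
    }
    # usage: nvme_get_sketch nvme0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1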
[nvme0n1 id-ns trace condensed: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0']
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:45.279 23:58:35 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:45.279 23:58:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:09:45.279 23:58:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:45.279 23:58:35 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
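functions.sh@47-63, traced above for nvme0 and again below for nvme1, is the enumeration skeleton: walk /sys/class/nvme, filter each controller's PCI address through pci_can_use (an allow/deny list check in scripts/common.sh), identify it with nvme_get, then record it in the ctrls/nvmes/bdfs/ordered_ctrls registries, with a nameref tying each controller to its namespace map. An illustrative reconstruction of that shape (simplified; nvme_get and pci_can_use are the script's own helpers, the loop body is an assumption from the trace):

    # Sketch of the discovery loop at functions.sh@47-63. The registries
    # ctrls/nvmes/bdfs/ordered_ctrls are global arrays declared elsewhere.
    scan_nvme_ctrls() {
        local ctrl ns pci ctrl_dev ns_dev
        for ctrl in /sys/class/nvme/nvme*; do
            # PCI address derived from the sysfs link here; the real script may differ.
            pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
            pci_can_use "$pci" || continue                    # env allow/deny list check
            ctrl_dev=${ctrl##*/}                              # e.g. nvme1
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills assoc array nvme1
            local -n _ctrl_ns=${ctrl_dev}_ns                  # nameref: nsid -> ns device
            for ns in "$ctrl/${ctrl##*/}n"*; do
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev                   # e.g. nvme0_ns[1]=nvme0n1
            done
            ctrls[$ctrl_dev]=$ctrl_dev
            nvmes[$ctrl_dev]=${ctrl_dev}_ns
            bdfs[$ctrl_dev]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }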
00:09:45.279 23:58:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
[nvme1 id-ctrl trace condensed (1/2): vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d]
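oncs=0x15d above is a capability bitmask; read against the NVMe base specification's ONCS bit assignments (decoded here as a side note, not test output), it advertises Compare, Dataset Management, Write Zeroes, the Save/Select feature fields, Timestamp, and Copy:

    # Decoding ONCS=0x15d with the base-spec bit positions (bash):
    oncs=0x15d
    names=(Compare Write-Uncorrectable DSM Write-Zeroes Save/Select Reservations Timestamp Verify Copy)
    for i in "${!names[@]}"; do
        (( oncs & 1 << i )) && echo "ONCS bit $i: ${names[i]}"
    done
    # -> bits 0,2,3,4,6,8: Compare, DSM, Write-Zeroes, Save/Select, Timestamp, Copy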
[nvme1 id-ctrl trace condensed (2/2): fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-]
00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
[nvme1n1 id-ns trace condensed, excerpt ends mid-parse: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 ...]
nvme/functions.sh@21 -- # read -r reg val 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:45.283 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:45.284 
23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:45.284 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:45.285 23:58:35 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:45.285 23:58:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:45.285 23:58:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.285 23:58:35 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:45.285 23:58:35 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:45.285 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:45.286 23:58:35 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:45.286 23:58:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
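(Aside: the wctemp/cctemp values just captured are Kelvin per the NVMe spec, so QEMU's 343/373 decode to the usual thresholds; a quick check, assuming the array built above:)

echo "$(( ${nvme2[wctemp]} - 273 )) C warning, $(( ${nvme2[cctemp]} - 273 )) C critical"
# -> 70 C warning, 100 C critical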
00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.286 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:45.287 23:58:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
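(Aside: the sqes/cqes bytes recorded above each pack two log2 sizes: low nibble is the required queue-entry size, high nibble the maximum. The nibble layout is from the NVMe base spec, not from this script; a quick decode:)

sqes=${nvme2[sqes]}   # 0x66
cqes=${nvme2[cqes]}   # 0x44
printf 'SQE: %d..%d bytes, CQE: %d..%d bytes\n' \
    $(( 1 << (sqes & 0xf) )) $(( 1 << ((sqes >> 4) & 0xf) )) \
    $(( 1 << (cqes & 0xf) )) $(( 1 << ((cqes >> 4) & 0xf) ))
# -> SQE: 64..64 bytes, CQE: 16..16 bytes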
00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.287 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:45.288 
23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:45.288 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
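(Aside: putting the nvme2n1 fields together, flbas 0x4 selects LBA format 4, and for these QEMU controllers lbaf4 is "ms:0 lbads:12" as in the nvme1n1 dump above, i.e. 4096-byte blocks with no metadata. Capacity then falls out of nsze; lbads for nvme2n1 is assumed from the nvme1n1 table since its own lbaf list is below:)

nsze=0x100000; lbads=12                          # lbads taken from lbaf4 above
echo "$(( nsze * (1 << lbads) / 1024**3 )) GiB"  # -> 4 GiB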
00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:45.289 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:45.290 23:58:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:45.290 23:58:35 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:45.290 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:45.291 23:58:35 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.291 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 
23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.292 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.556 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 
23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:45.557 23:58:35 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:45.557 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:45.558 
23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:45.558 23:58:35 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:45.558 23:58:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:45.558 23:58:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.558 23:58:35 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:45.558 23:58:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:45.558 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:45.559 23:58:35 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get nvme3: remaining id-ctrl fields read via "IFS=: / read -r reg val / eval" (reg=value):
00:09:45.559     rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1
00:09:45.559     fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:09:45.559     nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0
00:09:45.560     npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0
00:09:45.560     tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
00:09:45.560     mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1
00:09:45.561     anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:45.561     sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
00:09:45.561     awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0
00:09:45.561     maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:fdp-subsys3
00:09:45.562     ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:45.562     ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:45.562     rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:45.562 23:58:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:09:45.562 23:58:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:09:45.562 23:58:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:09:45.562 23:58:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:09:45.562 23:58:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:09:45.562 23:58:35 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:09:45.562 23:58:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:09:45.562 23:58:35 nvme_scc -- nvme/functions.sh@192-199 -- # get_ctrls_with_feature scc: for each registered controller, ctrl_has_scc fetches oncs via get_nvme_ctrl_feature and tests (( oncs & 1 << 8 ))
00:09:45.562 23:58:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1: oncs=0x15d -> echo nvme1
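The field-by-field trace above is the heart of functions.sh's nvme_get: each "reg : val" line that nvme-cli prints for id-ctrl is split and stored in a per-controller associative array. A minimal runnable sketch of the same pattern (whitespace handling simplified, and without the nameref indirection the real helper uses):

    #!/usr/bin/env bash
    # Parse `nvme id-ctrl` output ("reg : val" per line) into an associative array.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # "oncs      " -> "oncs"
        val=${val# }                      # drop the single space after the colon
        [[ -n $reg && -n $val ]] || continue   # skip blank or partial lines
        ctrl[$reg]=$val                   # e.g. ctrl[oncs]=0x15d
    done < <(nvme id-ctrl /dev/nvme3)
    echo "nn=${ctrl[nn]} oncs=${ctrl[oncs]} subnqn=${ctrl[subnqn]}"

Note that read hands everything after the first colon to the last variable, so values that themselves contain colons (like the subnqn above) survive intact.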
23:58:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0: oncs=0x15d -> echo nvme0
00:09:45.562 23:58:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3: oncs=0x15d -> echo nvme3
00:09:45.563 23:58:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2: oncs=0x15d -> echo nvme2
00:09:45.563 23:58:35 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:45.563 23:58:35 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:45.563 23:58:35 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:45.563 23:58:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:45.563 23:58:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:45.563 23:58:35 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
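The controller selection turns on a single bit: ONCS is the Optional NVM Command Support mask, and bit 8 advertises the Copy command, which is what "scc" (simple copy) means here. 0x15d is binary 1_0101_1101, so bit 8 is set on all four controllers. The check in isolation:

    oncs=0x15d
    if (( oncs & 1 << 8 )); then   # ONCS bit 8 = Copy (simple copy) command support
        echo "controller supports the NVMe Copy command"
    fi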
00:09:46.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:46.708 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:46.708 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:46.708 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:46.708 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:46.708 23:58:36 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:46.708 23:58:36 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:46.708 23:58:36 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:46.708 23:58:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:46.708 ************************************
00:09:46.708 START TEST nvme_simple_copy
00:09:46.708 ************************************
00:09:46.708 23:58:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:46.969 Initializing NVMe Controllers
00:09:46.969 Attaching to 0000:00:10.0
00:09:46.969 Controller supports SCC. Attached to 0000:00:10.0
00:09:46.969 Namespace ID: 1 size: 6GB
00:09:46.969 Initialization complete.
00:09:46.969
00:09:46.969 Controller QEMU NVMe Ctrl (12340 )
00:09:46.969 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:46.969 Namespace Block Size:4096
00:09:46.969 Writing LBAs 0 to 63 with Random Data
00:09:46.969 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:46.969 LBAs matching Written Data: 64
00:09:46.969
00:09:46.969 real 0m0.251s
00:09:46.969 user 0m0.093s
00:09:46.969 sys 0m0.056s
00:09:46.969 ************************************
00:09:46.969 END TEST nvme_simple_copy
00:09:46.969 ************************************
00:09:46.969 23:58:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:46.969 23:58:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:46.969
00:09:46.969 real 0m7.719s
00:09:46.969 user 0m1.066s
00:09:46.969 sys 0m1.386s
00:09:46.969 ************************************
00:09:46.969 END TEST nvme_scc
00:09:46.969 ************************************
00:09:46.969 23:58:37 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:46.969 23:58:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:46.969 23:58:37 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:46.969 23:58:37 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:46.969 23:58:37 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:46.969 23:58:37 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:46.969 23:58:37 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:46.969 23:58:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:46.969 23:58:37 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:46.969 23:58:37 -- common/autotest_common.sh@10 -- # set +x
00:09:46.969 ************************************
00:09:46.969 START TEST nvme_fdp
00:09:46.969 ************************************
00:09:46.969 23:58:37 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:09:47.232 * Looking for test storage...
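The banner pairs and the real/user/sys triples above come from autotest's run_test wrapper, which prints START/END markers around a timed test body. A reduced sketch of that pattern, not the verbatim autotest_common.sh implementation:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # emits the real/user/sys lines (on stderr)
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvme_simple_copy ./simple_copy -r 'trtype:pcie traddr:0000:00:10.0'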
00:09:47.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:47.232 23:58:37 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:09:47.232 23:58:37 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version
00:09:47.232 23:58:37 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:09:47.232 23:58:37 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:09:47.232 23:58:37 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:47.232 23:58:37 nvme_fdp -- scripts/common.sh@333-345 -- # IFS=.-: read -ra ver1 -> (1 15); IFS=.-: read -ra ver2 -> (2); op='<' ver1_l=2 ver2_l=1 lt=0 gt=0 eq=0
00:09:47.232 23:58:37 nvme_fdp -- scripts/common.sh@364-368 -- # v=0: decimal 1 -> ver1[v]=1, decimal 2 -> ver2[v]=2; (( 1 > 2 )) fails, (( 1 < 2 )) holds -> return 0 (lcov 1.15 is older than 2)
00:09:47.232 23:58:37 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:47.232 23:58:37 nvme_fdp -- common/autotest_common.sh@1694 -- # export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:09:47.232 23:58:37 nvme_fdp -- common/autotest_common.sh@1695 -- # export LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:09:47.232 23:58:37 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:47.232 23:58:37 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:47.232 23:58:37 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:09:47.232 23:58:37 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:09:47.232 23:58:37 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:47.232 23:58:37 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:09:47.232 23:58:37 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:47.232 23:58:37 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:47.232 23:58:37 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:47.232 23:58:37 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:$PATH
00:09:47.232 23:58:37 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:$PATH
00:09:47.232 23:58:37 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:$PATH
00:09:47.232 23:58:37 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:09:47.232 23:58:37 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
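The lt/cmp_versions trace above decides whether the installed lcov predates version 2 (it does, so the compatibility --rc options get exported). The comparison splits both version strings on ".", "-" and ":" and compares them numerically field by field; a condensed sketch of that logic, assuming purely numeric fields:

    lt() {   # is version $1 older than version $2?
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2: enable the compat --rc flags"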
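The oversized PATH echoed above is the result of paths/export.sh doing plain prepends each time it is sourced during the job, so the toolchain directories accumulate duplicate entries. The prepend pattern, plus a hypothetical guard (path_prepend is my illustration, not part of export.sh):

    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    # hypothetical helper that would avoid the duplication:
    path_prepend() { case ":$PATH:" in *":$1:"*) ;; *) PATH=$1:$PATH ;; esac; }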
00:09:47.232 23:58:37 nvme_fdp -- nvme/functions.sh@10-14 -- # ctrls=(); declare -A ctrls; nvmes=(); declare -A nvmes; bdfs=(); declare -A bdfs; ordered_ctrls=(); declare -a ordered_ctrls; nvme_name=
00:09:47.232 23:58:37 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:47.232 23:58:37 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:47.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:47.754 Waiting for block devices as requested
00:09:47.754 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:47.754 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:48.015 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:48.015 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:53.312 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:53.312 23:58:43 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:09:53.312 23:58:43 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:09:53.312 23:58:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:53.312 23:58:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:09:53.312 23:58:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:09:53.312 23:58:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:09:53.312 23:58:43 nvme_fdp -- scripts/common.sh@18-27 -- # pci_can_use: allow list empty ([[ =~ 0000:00:11.0 ]], [[ -z '' ]]) -> return 0
00:09:53.312 23:58:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:09:53.312 23:58:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:09:53.312 23:58:43 nvme_fdp -- nvme/functions.sh@17-20 -- # local ref=nvme0 reg val; shift; local -gA 'nvme0=()'
00:09:53.312 23:58:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:09:53.312 23:58:43 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl fields (reg=value): vid=0x1b36
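scan_nvme_ctrls, whose trace begins above, enumerates /sys/class/nvme/nvme* and admits each controller whose PCI address passes pci_can_use (the PCI allow/block lists are empty here, so everything passes). A reduced sketch of the enumeration; resolving the BDF through the sysfs device symlink is an assumption about one common way to obtain it, not necessarily the exact functions.sh code:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        # assumed BDF lookup: follow the controller's "device" symlink in sysfs
        pci=$(readlink -f "$ctrl/device") && pci=${pci##*/}   # e.g. 0000:00:11.0
        echo "found ${ctrl##*/} at $pci"
    done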
00:09:53.312 23:58:43 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl fields, continued (reg=value):
00:09:53.312     ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6
00:09:53.312     ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0
00:09:53.312     oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:09:53.312     fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:09:53.313     nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0
00:09:53.313     npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0
00:09:53.313     tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
00:09:53.313     mntmt=0 mxtmt=0
00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:53.313 23:58:43 nvme_fdp
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.313 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:53.314 23:58:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:53.314 23:58:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:53.314 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:53.315 
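The trace above is the nvme_get helper walking `nvme id-ctrl /dev/nvme0` output: each `reg : val` line is split with IFS=:, lines with an empty value are skipped (functions.sh@22), and the pair is evaled into the nvme0 associative array (functions.sh@23). A minimal stand-alone sketch of that capture pattern, assuming nvme-cli is on PATH and substituting a fixed array name plus direct assignment for the real helper's dynamic nameref-and-eval (the exact whitespace trimming is an assumption inferred from the stored values):

declare -A nvme0=()
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue        # skip headers and lines with no value, as at functions.sh@22
    nvme0[${reg// /}]=${val# }       # squeeze spaces out of the key, drop the leading space of the value
done < <(nvme id-ctrl /dev/nvme0)
echo "${nvme0[subnqn]}"              # -> nqn.2019-08.org.qemu:12341, matching the trace above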
23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:53.315 23:58:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:53.315 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:53.316 23:58:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:53.316 23:58:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:53.316 23:58:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:53.316 23:58:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.316 23:58:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:53.316 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 
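The ver=0x10400 captured for nvme1 just above packs the NVMe specification version as MJR:MNR:TER bytes (bits 31:16, 15:8, 7:0), so this QEMU controller reports NVMe 1.4.0. A one-line decode, for illustration:

ver=$(( 0x10400 ))
printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))   # -> NVMe 1.4.0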
23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:53.317 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 
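The wctemp=343 and cctemp=373 fields captured above are Kelvin values (Identify Controller reports the composite temperature thresholds in K), i.e. a warning threshold of roughly 70 C and a critical threshold of 100 C:

for t in 343 373; do printf '%d K ~ %d C\n' "$t" $(( t - 273 )); done   # -> 70 C (warning), 100 C (critical)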
23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:53.318 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:53.319 23:58:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:53.319 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:53.320 23:58:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:53.320 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:53.321 23:58:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:53.321 23:58:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:53.321 23:58:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.321 23:58:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:53.321 
23:58:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:53.321 23:58:43 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:53.321 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.322 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
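(Aside for readers following the trace: each repeated IFS=: / read -r reg val / eval triplet above is one iteration of the nvme_get helper in nvme/functions.sh. It runs nvme-cli's id-ctrl or id-ns against a device, splits every output line at the first colon into a register name and a value, and stores each non-empty value into a bash associative array named after the device (nvme2, nvme2n1, ...). A minimal standalone sketch of that parsing loop, assuming plain `nvme id-ctrl` output of the form "field : value"; the plain array below is a simplification of the nameref/eval indirection the real helper uses:

    # Sketch only: collect id-ctrl fields into an associative array,
    # mirroring the IFS=:/read/eval pattern visible in the trace above.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # field names arrive right-padded, e.g. "vid      "
        val=${val# }               # values carry a single leading space
        [[ -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme2)
    printf 'vid=%s mdts=%s oncs=%s\n' "${ctrl[vid]}" "${ctrl[mdts]}" "${ctrl[oncs]}"

Because read is given two variables, everything after the first colon lands in val, which is why multi-colon values such as ps0 above, "mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0", survive intact. Header lines without a value, like the initial "NVME Identify Controller:", produce an empty val and are skipped by the -n test, exactly as the `[[ -n '' ]]` step in the trace shows.)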
00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:53.323 23:58:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
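(A short worked example of consuming the fields just captured for nvme2: oncs is the optional-NVM-command bitmask, and sqes/cqes pack the minimum and maximum queue-entry sizes, as power-of-two exponents, into one nibble each. This is an illustrative sketch using the traced values; the bit positions follow the NVMe base specification, not anything defined by this test suite:

    oncs=0x15d   # from the trace: nvme2[oncs]=0x15d
    (( oncs & 0x004 )) && echo "Dataset Management supported"  # ONCS bit 2
    (( oncs & 0x008 )) && echo "Write Zeroes supported"        # ONCS bit 3
    sqes=0x66    # low nibble = min size exponent, high nibble = max
    echo "SQ entry size: $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4))) bytes"

For this controller the last line prints 64-64 bytes, the required submission-queue entry size; cqes=0x44 decodes the same way to 16-byte completion-queue entries.)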
00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.323 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:53.324 23:58:43 nvme_fdp -- 
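Values are split only on the first ':', so everything after it lands in val intact; that is how `rwt` captured `0 rwl:0 idle_power:- active_power:-` and `ps0` captured a whole power-state description above. The eval'd assignment double-quotes the value for exactly this reason. A minimal, runnable illustration (array and variable names mirror the trace):

    declare -A nvme2
    reg=ps0 val='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    eval "nvme2[$reg]=\"$val\""   # inner quotes keep the multi-word value in one slot
    echo "${nvme2[ps0]}"          # -> mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0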
nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
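With the controller dump finished, the trace moves on to namespaces: functions.sh@53 binds a nameref to the per-controller map, @54 globs the controller's sysfs children, and @55-58 run id-ns for each and key the result by namespace number. The fragment, reassembled from those line tags (it runs inside the surrounding discovery function, hence the `local`; the `|| continue` guard is an assumption):

    local -n _ctrl_ns=${ctrl_dev}_ns              # functions.sh@53: nvme2_ns on this pass
    for ns in "$ctrl/${ctrl##*/}n"*; do           # functions.sh@54: .../nvme2/nvme2n1 etc.
        [[ -e $ns ]] || continue                  # functions.sh@55
        ns_dev=${ns##*/}                          # functions.sh@56: basename, e.g. nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57: fills nvme2n1[...]
        _ctrl_ns[${ns##*n}]=$ns_dev               # functions.sh@58: key '1' for nvme2n1
    done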
00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.324 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.325 23:58:43 nvme_fdp -- 
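In the eight lbafN entries just recorded for nvme2n1, ms is the per-block metadata size in bytes, lbads is the log2 of the data block size, and rp is a relative performance hint; flbas selects the active entry in its low nibble, which is why flbas=0x4 pairs with the '(in use)' marker on lbaf4 (ms:0 lbads:12), i.e. plain 4096-byte blocks. A small decode using the captured values:

    flbas=0x4
    fmt=$(( flbas & 0xf ))                       # low nibble picks the lbaf entry
    lbaf4='ms:0 lbads:12 rp:0 (in use)'          # nvme2n1[lbaf4] as captured above
    lbads=${lbaf4#*lbads:}; lbads=${lbads%% *}   # pull out the lbads field
    echo "lbaf$fmt: $(( 1 << lbads ))-byte data blocks"   # -> lbaf4: 4096-byte data blocks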
nvme/functions.sh@21 -- # read -r reg val 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.325 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:53.326 23:58:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:53.326 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:53.327 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:53.328 
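The nvme2n2 and nvme2n3 passes replay the nvme2n1 pattern field for field; the three QEMU namespaces behind this controller are configured identically. Once populated, the arrays read back uniformly through a nameref, e.g. (the loop and nameref names here are illustrative, not from the script):

    for n in nvme2n1 nvme2n2 nvme2n3; do
        declare -n _ns=$n                        # nameref onto the captured array
        echo "$n: nsze=${_ns[nsze]} flbas=${_ns[flbas]} lbaf4='${_ns[lbaf4]}'"
        unset -n _ns
    done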
23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:53.328 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:53.329 23:58:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:53.329 23:58:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:53.329 23:58:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:53.329 23:58:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.329 23:58:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:53.329 23:58:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.329 
23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.329 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 
23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:53.330 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
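
The xtrace above is nvme_get (nvme/functions.sh@52) converting every "key : value" line printed by "nvme id-ctrl" into a bash associative array: IFS=: splits each line into reg and val, and the eval stores the pair under a caller-chosen array name (nvme3 here). A condensed sketch of the same pattern, minus the eval indirection and with an illustrative device path:

  # Parse "nvme id-ctrl" key:value output into an associative array (sketch).
  declare -A ctrl
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}   # "ctratt   " -> "ctratt"
    val=${val# }               # drop the single space after the colon
    [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  printf 'vid=%s ctratt=%s\n' "${ctrl[vid]}" "${ctrl[ctratt]}"

The eval in the real helper exists only because the target array name (nvme0, nvme1, ...) is chosen at call time.
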
00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.331 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.332 23:58:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:53.332 23:58:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
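
This scan, which continues below, is get_ctrls_with_feature walking every registered controller and asking ctrl_has_fdp whether its CTRATT value advertises Flexible Data Placement. The decisive test is the functions.sh@180 line, (( ctratt & 1 << 19 )): bit 19 of CTRATT is the FDP capability bit, so the 0x88010 reported by nvme3 passes while the 0x8000 of the other three controllers fails. Reduced to a standalone check (the function name is ours; the values are the ones traced here):

  has_fdp() { (( $1 & 1 << 19 )); }       # CTRATT bit 19 = FDP capable
  has_fdp 0x88010 && echo "FDP capable"   # nvme3 in this run
  has_fdp 0x8000  || echo "no FDP"        # nvme0, nvme1, nvme2 in this run
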
00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:53.332 23:58:43 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:53.332 23:58:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:53.332 23:58:43 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:53.332 23:58:43 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:53.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:54.473 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.473 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.473 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.473 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.473 23:58:44 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:54.473 23:58:44 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:54.473 23:58:44 
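
Before the fdp test binary can drive 0000:00:13.0 from user space, setup.sh (above) detaches the four QEMU NVMe controllers from the kernel nvme driver and binds them to uio_pci_generic; the virtio disk at 0000:00:03.0 carries active mounts and is deliberately left alone. The resulting binding can be confirmed straight from sysfs:

  # Show which driver each controller ended up bound to after setup.sh.
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    drv=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    printf '%s -> %s\n' "$bdf" "${drv##*/}"   # expect uio_pci_generic
  done
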
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.473 23:58:44 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:54.473 ************************************ 00:09:54.473 START TEST nvme_flexible_data_placement 00:09:54.473 ************************************ 00:09:54.473 23:58:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:54.734 Initializing NVMe Controllers 00:09:54.734 Attaching to 0000:00:13.0 00:09:54.734 Controller supports FDP Attached to 0000:00:13.0 00:09:54.734 Namespace ID: 1 Endurance Group ID: 1 00:09:54.734 Initialization complete. 00:09:54.734 00:09:54.734 ================================== 00:09:54.734 == FDP tests for Namespace: #01 == 00:09:54.734 ================================== 00:09:54.734 00:09:54.734 Get Feature: FDP: 00:09:54.734 ================= 00:09:54.734 Enabled: Yes 00:09:54.734 FDP configuration Index: 0 00:09:54.734 00:09:54.734 FDP configurations log page 00:09:54.734 =========================== 00:09:54.734 Number of FDP configurations: 1 00:09:54.734 Version: 0 00:09:54.734 Size: 112 00:09:54.734 FDP Configuration Descriptor: 0 00:09:54.735 Descriptor Size: 96 00:09:54.735 Reclaim Group Identifier format: 2 00:09:54.735 FDP Volatile Write Cache: Not Present 00:09:54.735 FDP Configuration: Valid 00:09:54.735 Vendor Specific Size: 0 00:09:54.735 Number of Reclaim Groups: 2 00:09:54.735 Number of Reclaim Unit Handles: 8 00:09:54.735 Max Placement Identifiers: 128 00:09:54.735 Number of Namespaces Supported: 256 00:09:54.735 Reclaim Unit Nominal Size: 6000000 bytes 00:09:54.735 Estimated Reclaim Unit Time Limit: Not Reported 00:09:54.735 RUH Desc #000: RUH Type: Initially Isolated 00:09:54.735 RUH Desc #001: RUH Type: Initially Isolated 00:09:54.735 RUH Desc #002: RUH Type: Initially Isolated 00:09:54.735 RUH Desc #003: RUH Type: Initially Isolated 00:09:54.735 RUH Desc #004: RUH Type: Initially Isolated 00:09:54.735 RUH Desc #005: RUH Type: Initially Isolated 00:09:54.735 RUH Desc #006: RUH Type: Initially Isolated 00:09:54.735 RUH Desc #007: RUH Type: Initially Isolated 00:09:54.735 00:09:54.735 FDP reclaim unit handle usage log page 00:09:54.735 ====================================== 00:09:54.735 Number of Reclaim Unit Handles: 8 00:09:54.735 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:54.735 RUH Usage Desc #001: RUH Attributes: Unused 00:09:54.735 RUH Usage Desc #002: RUH Attributes: Unused 00:09:54.735 RUH Usage Desc #003: RUH Attributes: Unused 00:09:54.735 RUH Usage Desc #004: RUH Attributes: Unused 00:09:54.735 RUH Usage Desc #005: RUH Attributes: Unused 00:09:54.735 RUH Usage Desc #006: RUH Attributes: Unused 00:09:54.735 RUH Usage Desc #007: RUH Attributes: Unused 00:09:54.735 00:09:54.735 FDP statistics log page 00:09:54.735 ======================= 00:09:54.735 Host bytes with metadata written: 2068746240 00:09:54.735 Media bytes with metadata written: 2069065728 00:09:54.735 Media bytes erased: 0 00:09:54.735 00:09:54.735 FDP Reclaim unit handle status 00:09:54.735 ============================== 00:09:54.735 Number of RUHS descriptors: 2 00:09:54.735 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002b17 00:09:54.735 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:54.735 00:09:54.735 FDP write on placement id: 0 success 00:09:54.735 00:09:54.735 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:54.735 00:09:54.735 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:54.735 00:09:54.735 Get Feature: FDP Events for Placement handle: #0 00:09:54.735 ======================== 00:09:54.735 Number of FDP Events: 6 00:09:54.735 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:54.735 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:54.735 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:54.735 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:54.735 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:54.735 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:54.735 00:09:54.735 FDP events log page 00:09:54.735 =================== 00:09:54.735 Number of FDP events: 1 00:09:54.735 FDP Event #0: 00:09:54.735 Event Type: RU Not Written to Capacity 00:09:54.735 Placement Identifier: Valid 00:09:54.735 NSID: Valid 00:09:54.735 Location: Valid 00:09:54.735 Placement Identifier: 0 00:09:54.735 Event Timestamp: 5 00:09:54.735 Namespace Identifier: 1 00:09:54.735 Reclaim Group Identifier: 0 00:09:54.735 Reclaim Unit Handle Identifier: 0 00:09:54.735 00:09:54.735 FDP test passed 00:09:54.735 00:09:54.735 real 0m0.212s 00:09:54.735 user 0m0.056s 00:09:54.735 sys 0m0.055s 00:09:54.735 ************************************ 00:09:54.735 END TEST nvme_flexible_data_placement 00:09:54.735 23:58:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.735 23:58:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:54.735 ************************************ 00:09:54.735 00:09:54.735 real 0m7.757s 00:09:54.735 user 0m1.035s 00:09:54.735 sys 0m1.480s 00:09:54.735 23:58:45 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.735 ************************************ 00:09:54.735 END TEST nvme_fdp 00:09:54.735 ************************************ 00:09:54.735 23:58:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:54.995 23:58:45 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:54.995 23:58:45 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:54.995 23:58:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:54.995 23:58:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.995 23:58:45 -- common/autotest_common.sh@10 -- # set +x 00:09:54.995 ************************************ 00:09:54.995 START TEST nvme_rpc 00:09:54.995 ************************************ 00:09:54.995 23:58:45 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:54.995 * Looking for test storage... 
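
The FDP report above (configurations, reclaim unit handle usage, statistics, RUHS, and events log pages) is printed by SPDK's test/nvme/fdp/fdp binary. Recent nvme-cli releases ship an fdp plugin that can read the same log pages by hand once the controller is bound back to the kernel driver; the exact subcommands and flags vary by nvme-cli version, so treat the invocations below as an assumption rather than part of this run:

  nvme fdp configs /dev/nvme3 --endgrp-id=1   # FDP configurations log page
  nvme fdp usage   /dev/nvme3 --endgrp-id=1   # reclaim unit handle usage
  nvme fdp stats   /dev/nvme3 --endgrp-id=1   # host/media bytes written
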
00:09:54.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:54.995 23:58:45 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:54.995 23:58:45 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:54.995 23:58:45 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:54.995 23:58:45 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.995 23:58:45 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:54.995 23:58:45 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.995 23:58:45 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:54.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.995 --rc genhtml_branch_coverage=1 00:09:54.995 --rc genhtml_function_coverage=1 00:09:54.995 --rc genhtml_legend=1 00:09:54.995 --rc geninfo_all_blocks=1 00:09:54.995 --rc geninfo_unexecuted_blocks=1 00:09:54.995 00:09:54.995 ' 00:09:54.995 23:58:45 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:54.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.995 --rc genhtml_branch_coverage=1 00:09:54.995 --rc genhtml_function_coverage=1 00:09:54.995 --rc genhtml_legend=1 00:09:54.995 --rc geninfo_all_blocks=1 00:09:54.995 --rc geninfo_unexecuted_blocks=1 00:09:54.995 00:09:54.995 ' 00:09:54.995 23:58:45 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:54.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.995 --rc genhtml_branch_coverage=1 00:09:54.995 --rc genhtml_function_coverage=1 00:09:54.995 --rc genhtml_legend=1 00:09:54.995 --rc geninfo_all_blocks=1 00:09:54.995 --rc geninfo_unexecuted_blocks=1 00:09:54.995 00:09:54.995 ' 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:54.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.996 --rc genhtml_branch_coverage=1 00:09:54.996 --rc genhtml_function_coverage=1 00:09:54.996 --rc genhtml_legend=1 00:09:54.996 --rc geninfo_all_blocks=1 00:09:54.996 --rc geninfo_unexecuted_blocks=1 00:09:54.996 00:09:54.996 ' 00:09:54.996 23:58:45 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:54.996 23:58:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:54.996 23:58:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:54.996 23:58:45 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77521 00:09:54.996 23:58:45 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:54.996 23:58:45 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:54.996 23:58:45 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77521 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 77521 ']' 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.996 23:58:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.255 [2024-11-20 23:58:45.467449] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
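
The get_first_nvme_bdf helper traced above builds its address list by piping scripts/gen_nvme.sh through jq -r '.config[].params.traddr' and echoing the first entry, 0000:00:10.0. A script-free approximation over sysfs, assuming the controllers are bound to the kernel nvme driver (the helper name is ours, and it relies on glob order where the real helper enumerates via SPDK):

  first_nvme_bdf() {
    local dev
    for dev in /sys/class/nvme/nvme*/device; do
      [[ -e $dev ]] || break
      basename "$(readlink -f "$dev")"   # e.g. 0000:00:10.0
      return 0
    done
    return 1
  }
  bdf=$(first_nvme_bdf) && echo "first NVMe bdf: $bdf"
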
00:09:55.255 [2024-11-20 23:58:45.467602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77521 ] 00:09:55.255 [2024-11-20 23:58:45.605931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:55.255 [2024-11-20 23:58:45.657703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.255 [2024-11-20 23:58:45.657763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.195 23:58:46 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.195 23:58:46 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:56.195 23:58:46 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:56.195 Nvme0n1 00:09:56.195 23:58:46 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:56.195 23:58:46 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:56.455 request: 00:09:56.455 { 00:09:56.455 "bdev_name": "Nvme0n1", 00:09:56.455 "filename": "non_existing_file", 00:09:56.455 "method": "bdev_nvme_apply_firmware", 00:09:56.455 "req_id": 1 00:09:56.455 } 00:09:56.455 Got JSON-RPC error response 00:09:56.455 response: 00:09:56.455 { 00:09:56.455 "code": -32603, 00:09:56.455 "message": "open file failed." 00:09:56.455 } 00:09:56.455 23:58:46 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:56.455 23:58:46 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:56.455 23:58:46 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:56.716 23:58:47 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:56.716 23:58:47 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77521 00:09:56.716 23:58:47 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 77521 ']' 00:09:56.716 23:58:47 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 77521 00:09:56.716 23:58:47 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:56.716 23:58:47 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.716 23:58:47 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77521 00:09:56.716 23:58:47 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:56.716 23:58:47 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:56.716 23:58:47 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77521' 00:09:56.716 killing process with pid 77521 00:09:56.716 23:58:47 nvme_rpc -- common/autotest_common.sh@969 -- # kill 77521 00:09:56.716 23:58:47 nvme_rpc -- common/autotest_common.sh@974 -- # wait 77521 00:09:56.977 00:09:56.977 real 0m2.182s 00:09:56.977 user 0m4.263s 00:09:56.977 sys 0m0.518s 00:09:56.977 23:58:47 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.977 23:58:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.977 ************************************ 00:09:56.977 END TEST nvme_rpc 00:09:56.977 ************************************ 00:09:57.239 23:58:47 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:57.239 23:58:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:57.239 23:58:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.239 23:58:47 -- common/autotest_common.sh@10 -- # set +x 00:09:57.239 ************************************ 00:09:57.239 START TEST nvme_rpc_timeouts 00:09:57.239 ************************************ 00:09:57.239 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:57.239 * Looking for test storage... 00:09:57.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:57.239 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:57.239 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:57.239 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:57.239 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.239 23:58:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:57.239 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.239 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:57.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.239 --rc genhtml_branch_coverage=1 00:09:57.239 --rc genhtml_function_coverage=1 00:09:57.239 --rc genhtml_legend=1 00:09:57.239 --rc geninfo_all_blocks=1 00:09:57.239 --rc geninfo_unexecuted_blocks=1 00:09:57.239 00:09:57.239 ' 00:09:57.239 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:57.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.239 --rc genhtml_branch_coverage=1 00:09:57.239 --rc genhtml_function_coverage=1 00:09:57.239 --rc genhtml_legend=1 00:09:57.239 --rc geninfo_all_blocks=1 00:09:57.239 --rc geninfo_unexecuted_blocks=1 00:09:57.239 00:09:57.239 ' 00:09:57.239 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:57.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.239 --rc genhtml_branch_coverage=1 00:09:57.239 --rc genhtml_function_coverage=1 00:09:57.239 --rc genhtml_legend=1 00:09:57.239 --rc geninfo_all_blocks=1 00:09:57.239 --rc geninfo_unexecuted_blocks=1 00:09:57.239 00:09:57.239 ' 00:09:57.239 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:57.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.239 --rc genhtml_branch_coverage=1 00:09:57.239 --rc genhtml_function_coverage=1 00:09:57.239 --rc genhtml_legend=1 00:09:57.239 --rc geninfo_all_blocks=1 00:09:57.239 --rc geninfo_unexecuted_blocks=1 00:09:57.239 00:09:57.239 ' 00:09:57.239 23:58:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.239 23:58:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77575 00:09:57.240 23:58:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77575 00:09:57.240 23:58:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77607 00:09:57.240 23:58:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
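Note on the rpc.py calling pattern: the nvme_rpc test above, and the timeout test starting here, drive the target through scripts/rpc.py and tolerate expected failures by capturing the exit status rather than letting errexit abort the run. A minimal sketch of that pattern, reusing the exact commands traced above (the target must already be running for this to work):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Attach a controller, then deliberately point the firmware RPC at a
  # file that does not exist; the target replies with JSON-RPC error
  # -32603 ("open file failed.") and rpc.py exits non-zero.
  $rpc_py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  rv=0
  $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1 || rv=$?
  if [ "$rv" -ne 0 ]; then
    echo "firmware apply failed as expected (rv=$rv)"
  fi
  $rpc_py bdev_nvme_detach_controller Nvme0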
00:09:57.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.240 23:58:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77607 00:09:57.240 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 77607 ']' 00:09:57.240 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.240 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.240 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.240 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.240 23:58:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:57.240 23:58:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:57.240 [2024-11-20 23:58:47.631308] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:57.240 [2024-11-20 23:58:47.631437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77607 ] 00:09:57.501 [2024-11-20 23:58:47.769746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:57.501 [2024-11-20 23:58:47.809216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.501 [2024-11-20 23:58:47.809335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.072 23:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.072 Checking default timeout settings: 00:09:58.072 23:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:58.072 23:58:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:58.072 23:58:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:58.642 Making settings changes with rpc: 00:09:58.642 23:58:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:58.642 23:58:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:58.642 Check default vs. modified settings: 00:09:58.642 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:58.642 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77575 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77575 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:59.215 Setting action_on_timeout is changed as expected. 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77575 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77575 00:09:59.215 Setting timeout_us is changed as expected. 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77575 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77575 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:59.215 Setting timeout_admin_us is changed as expected. 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77575 /tmp/settings_modified_77575 00:09:59.215 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77607 00:09:59.215 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 77607 ']' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 77607 00:09:59.215 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:59.215 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77607 00:09:59.215 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.215 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.215 killing process with pid 77607 00:09:59.215 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77607' 00:09:59.215 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 77607 00:09:59.215 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 77607 00:09:59.788 RPC TIMEOUT SETTING TEST PASSED. 00:09:59.788 23:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
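The default-vs-modified comparison loop above extracts each value from the two save_config dumps with the same three-stage grep/awk/sed pipeline every time. A condensed sketch of that check (the get_setting helper name is illustrative, not part of the test script; the /tmp paths are the ones used in this run):

  get_setting() {
    # First field after the key, stripped to alphanumerics,
    # e.g. '"timeout_us": 12000000,' -> 12000000
    grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
  }
  for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(get_setting "$setting" /tmp/settings_default_77575)
    after=$(get_setting "$setting" /tmp/settings_modified_77575)
    [ "$before" == "$after" ] && exit 1   # value must have changed
    echo "Setting $setting is changed as expected."
  done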
00:09:59.788 00:09:59.788 real 0m2.531s 00:09:59.788 user 0m4.970s 00:09:59.788 sys 0m0.526s 00:09:59.788 ************************************ 00:09:59.788 END TEST nvme_rpc_timeouts 00:09:59.788 ************************************ 00:09:59.788 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.788 23:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:59.788 23:58:50 -- spdk/autotest.sh@239 -- # uname -s 00:09:59.788 23:58:50 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:59.788 23:58:50 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:59.788 23:58:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:59.788 23:58:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.788 23:58:50 -- common/autotest_common.sh@10 -- # set +x 00:09:59.788 ************************************ 00:09:59.788 START TEST sw_hotplug 00:09:59.788 ************************************ 00:09:59.788 23:58:50 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:59.788 * Looking for test storage... 00:09:59.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:59.788 23:58:50 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:59.788 23:58:50 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:59.788 23:58:50 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:59.788 23:58:50 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.788 23:58:50 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:59.788 23:58:50 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.788 23:58:50 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:59.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.788 --rc genhtml_branch_coverage=1 00:09:59.788 --rc genhtml_function_coverage=1 00:09:59.788 --rc genhtml_legend=1 00:09:59.788 --rc geninfo_all_blocks=1 00:09:59.788 --rc geninfo_unexecuted_blocks=1 00:09:59.788 00:09:59.788 ' 00:09:59.788 23:58:50 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:59.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.788 --rc genhtml_branch_coverage=1 00:09:59.788 --rc genhtml_function_coverage=1 00:09:59.788 --rc genhtml_legend=1 00:09:59.788 --rc geninfo_all_blocks=1 00:09:59.788 --rc geninfo_unexecuted_blocks=1 00:09:59.788 00:09:59.788 ' 00:09:59.788 23:58:50 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:59.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.788 --rc genhtml_branch_coverage=1 00:09:59.788 --rc genhtml_function_coverage=1 00:09:59.788 --rc genhtml_legend=1 00:09:59.788 --rc geninfo_all_blocks=1 00:09:59.788 --rc geninfo_unexecuted_blocks=1 00:09:59.788 00:09:59.788 ' 00:09:59.788 23:58:50 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:59.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.788 --rc genhtml_branch_coverage=1 00:09:59.788 --rc genhtml_function_coverage=1 00:09:59.788 --rc genhtml_legend=1 00:09:59.788 --rc geninfo_all_blocks=1 00:09:59.788 --rc geninfo_unexecuted_blocks=1 00:09:59.788 00:09:59.788 ' 00:09:59.788 23:58:50 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:00.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:00.360 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:00.360 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:00.360 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:00.360 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:00.360 23:58:50 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:00.360 23:58:50 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:00.360 23:58:50 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
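The nvme_in_userspace expansion traced next keeps only PCI functions whose class code is 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). The same filter can be reproduced standalone with the lspci pipeline the helper uses:

  # Print the BDFs of all NVMe controllers (class/subclass "0108", prog-if -p02)
  lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'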
00:10:00.360 23:58:50 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:00.360 23:58:50 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:00.360 23:58:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:00.361 23:58:50 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:00.361 23:58:50 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:00.361 23:58:50 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:00.361 23:58:50 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:00.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:00.882 Waiting for block devices as requested 00:10:00.882 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.882 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.143 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.143 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:06.426 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:06.426 23:58:56 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:06.426 23:58:56 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:06.687 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:06.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:06.687 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:06.946 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:07.204 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:07.204 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:07.463 23:58:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78453 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:07.463 23:58:57 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:07.463 23:58:57 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:07.463 23:58:57 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:07.463 23:58:57 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:07.463 23:58:57 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:07.463 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:07.464 23:58:57 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:07.723 Initializing NVMe Controllers 00:10:07.723 Attaching to 0000:00:10.0 00:10:07.723 Attaching to 0000:00:11.0 00:10:07.723 Attached to 0000:00:11.0 00:10:07.723 Attached to 0000:00:10.0 00:10:07.723 Initialization complete. Starting I/O... 
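In each of the three hotplug events that follow, remove_attach_helper surprise-removes both allowed controllers while the hotplug example app keeps I/O running, then brings them back. The detach side is plain PCI sysfs; a sketch assuming the `echo 1` traces below target the standard kernel remove node:

  nvmes=(0000:00:10.0 0000:00:11.0)
  for dev in "${nvmes[@]}"; do
    # Surprise-remove the function; in-flight I/O on it gets aborted
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
  done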
00:10:07.723 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:07.723 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:07.723 00:10:08.664 QEMU NVMe Ctrl (12341 ): 2580 I/Os completed (+2580) 00:10:08.664 QEMU NVMe Ctrl (12340 ): 2586 I/Os completed (+2586) 00:10:08.664 00:10:09.610 QEMU NVMe Ctrl (12341 ): 5694 I/Os completed (+3114) 00:10:09.610 QEMU NVMe Ctrl (12340 ): 5682 I/Os completed (+3096) 00:10:09.610 00:10:10.554 QEMU NVMe Ctrl (12341 ): 8870 I/Os completed (+3176) 00:10:10.554 QEMU NVMe Ctrl (12340 ): 8865 I/Os completed (+3183) 00:10:10.554 00:10:11.939 QEMU NVMe Ctrl (12341 ): 12002 I/Os completed (+3132) 00:10:11.939 QEMU NVMe Ctrl (12340 ): 11997 I/Os completed (+3132) 00:10:11.939 00:10:12.617 QEMU NVMe Ctrl (12341 ): 15590 I/Os completed (+3588) 00:10:12.617 QEMU NVMe Ctrl (12340 ): 15585 I/Os completed (+3588) 00:10:12.617 00:10:13.565 23:59:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:13.565 23:59:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.565 23:59:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.565 [2024-11-20 23:59:03.768926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:13.565 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:13.565 [2024-11-20 23:59:03.770378] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.770548] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.770595] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.770614] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:13.565 [2024-11-20 23:59:03.771948] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.772008] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.772023] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.772039] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 23:59:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.565 23:59:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.565 [2024-11-20 23:59:03.790644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:13.565 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:13.565 [2024-11-20 23:59:03.792027] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.792190] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.792234] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.792439] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:13.565 [2024-11-20 23:59:03.793877] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.794025] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.794111] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 [2024-11-20 23:59:03.794142] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.565 23:59:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:13.565 23:59:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:13.565 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:13.565 EAL: Scan for (pci) bus failed. 00:10:13.565 00:10:13.565 23:59:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.565 23:59:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.565 23:59:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:13.828 23:59:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:13.828 23:59:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.828 23:59:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.828 23:59:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.828 23:59:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:13.828 Attaching to 0000:00:10.0 00:10:13.828 Attached to 0000:00:10.0 00:10:13.828 23:59:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:13.828 23:59:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.828 23:59:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:13.828 Attaching to 0000:00:11.0 00:10:13.828 Attached to 0000:00:11.0 00:10:14.772 QEMU NVMe Ctrl (12340 ): 2448 I/Os completed (+2448) 00:10:14.772 QEMU NVMe Ctrl (12341 ): 2172 I/Os completed (+2172) 00:10:14.772 00:10:15.713 QEMU NVMe Ctrl (12340 ): 5012 I/Os completed (+2564) 00:10:15.713 QEMU NVMe Ctrl (12341 ): 4749 I/Os completed (+2577) 00:10:15.713 00:10:16.656 QEMU NVMe Ctrl (12340 ): 7826 I/Os completed (+2814) 00:10:16.656 QEMU NVMe Ctrl (12341 ): 7562 I/Os completed (+2813) 00:10:16.656 00:10:17.594 QEMU NVMe Ctrl (12340 ): 11110 I/Os completed (+3284) 00:10:17.594 QEMU NVMe Ctrl (12341 ): 10846 I/Os completed (+3284) 00:10:17.594 00:10:18.528 QEMU NVMe Ctrl (12340 ): 15519 I/Os completed (+4409) 00:10:18.528 QEMU NVMe Ctrl (12341 ): 15346 I/Os completed (+4500) 00:10:18.528 00:10:19.910 QEMU NVMe Ctrl (12340 ): 18859 I/Os completed (+3340) 00:10:19.910 QEMU NVMe Ctrl (12341 ): 18751 I/Os completed (+3405) 00:10:19.910 00:10:20.852 QEMU NVMe Ctrl (12340 ): 21671 I/Os completed (+2812) 00:10:20.852 QEMU NVMe Ctrl (12341 ): 21701 I/Os completed (+2950) 
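The reattach sequence traced above (Attaching to / Attached to) is the mirror image: one bus rescan rediscovers the removed functions, and each is steered back to uio_pci_generic before the app re-attaches. A sketch assuming the standard sysfs nodes sit behind the echoed values in the trace; the exact sequence in sw_hotplug.sh may differ:

  echo 1 > /sys/bus/pci/rescan                    # rediscover removed functions
  for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe      # bind using the override
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"   # clear it again
  done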
00:10:20.852 00:10:21.792 QEMU NVMe Ctrl (12340 ): 25183 I/Os completed (+3512) 00:10:21.792 QEMU NVMe Ctrl (12341 ): 25217 I/Os completed (+3516) 00:10:21.792 00:10:22.732 QEMU NVMe Ctrl (12340 ): 29032 I/Os completed (+3849) 00:10:22.732 QEMU NVMe Ctrl (12341 ): 29024 I/Os completed (+3807) 00:10:22.732 00:10:23.675 QEMU NVMe Ctrl (12340 ): 32112 I/Os completed (+3080) 00:10:23.675 QEMU NVMe Ctrl (12341 ): 32108 I/Os completed (+3084) 00:10:23.675 00:10:24.618 QEMU NVMe Ctrl (12340 ): 34940 I/Os completed (+2828) 00:10:24.618 QEMU NVMe Ctrl (12341 ): 34940 I/Os completed (+2832) 00:10:24.618 00:10:25.560 QEMU NVMe Ctrl (12340 ): 38012 I/Os completed (+3072) 00:10:25.560 QEMU NVMe Ctrl (12341 ): 38018 I/Os completed (+3078) 00:10:25.560 00:10:25.828 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:25.828 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.828 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.828 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.828 [2024-11-20 23:59:16.133409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:25.829 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:25.829 [2024-11-20 23:59:16.134672] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.134750] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.134776] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.134803] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:25.829 [2024-11-20 23:59:16.136848] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.136914] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.136929] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.136945] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.829 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.829 [2024-11-20 23:59:16.159467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:25.829 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:25.829 [2024-11-20 23:59:16.160671] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.160829] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.160854] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.160869] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:25.829 [2024-11-20 23:59:16.162035] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.162077] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.162096] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 [2024-11-20 23:59:16.162109] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.829 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:25.829 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:26.092 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:26.092 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:26.092 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:26.092 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:26.092 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:26.092 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:26.092 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:26.092 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:26.092 Attaching to 0000:00:10.0 00:10:26.092 Attached to 0000:00:10.0 00:10:26.092 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:26.092 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:26.092 23:59:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:26.092 Attaching to 0000:00:11.0 00:10:26.092 Attached to 0000:00:11.0 00:10:26.675 QEMU NVMe Ctrl (12340 ): 1820 I/Os completed (+1820) 00:10:26.675 QEMU NVMe Ctrl (12341 ): 1561 I/Os completed (+1561) 00:10:26.675 00:10:27.640 QEMU NVMe Ctrl (12340 ): 6010 I/Os completed (+4190) 00:10:27.640 QEMU NVMe Ctrl (12341 ): 5611 I/Os completed (+4050) 00:10:27.640 00:10:28.578 QEMU NVMe Ctrl (12340 ): 10259 I/Os completed (+4249) 00:10:28.578 QEMU NVMe Ctrl (12341 ): 9816 I/Os completed (+4205) 00:10:28.578 00:10:29.519 QEMU NVMe Ctrl (12340 ): 14501 I/Os completed (+4242) 00:10:29.519 QEMU NVMe Ctrl (12341 ): 13924 I/Os completed (+4108) 00:10:29.519 00:10:30.900 QEMU NVMe Ctrl (12340 ): 18451 I/Os completed (+3950) 00:10:30.900 QEMU NVMe Ctrl (12341 ): 17937 I/Os completed (+4013) 00:10:30.900 00:10:31.839 QEMU NVMe Ctrl (12340 ): 21700 I/Os completed (+3249) 00:10:31.839 QEMU NVMe Ctrl (12341 ): 21224 I/Os completed (+3287) 00:10:31.839 00:10:32.782 QEMU NVMe Ctrl (12340 ): 24808 I/Os completed (+3108) 00:10:32.782 QEMU NVMe Ctrl (12341 ): 24347 I/Os completed (+3123) 00:10:32.782 00:10:33.725 QEMU NVMe Ctrl (12340 ): 28212 I/Os completed (+3404) 00:10:33.725 QEMU NVMe Ctrl (12341 ): 27751 I/Os completed (+3404) 00:10:33.725 
00:10:34.668 QEMU NVMe Ctrl (12340 ): 31240 I/Os completed (+3028) 00:10:34.668 QEMU NVMe Ctrl (12341 ): 30779 I/Os completed (+3028) 00:10:34.668 00:10:35.611 QEMU NVMe Ctrl (12340 ): 34372 I/Os completed (+3132) 00:10:35.611 QEMU NVMe Ctrl (12341 ): 33911 I/Os completed (+3132) 00:10:35.611 00:10:36.554 QEMU NVMe Ctrl (12340 ): 37680 I/Os completed (+3308) 00:10:36.554 QEMU NVMe Ctrl (12341 ): 37219 I/Os completed (+3308) 00:10:36.554 00:10:37.940 QEMU NVMe Ctrl (12340 ): 41140 I/Os completed (+3460) 00:10:37.940 QEMU NVMe Ctrl (12341 ): 40698 I/Os completed (+3479) 00:10:37.940 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.200 [2024-11-20 23:59:28.433867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:38.200 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:38.200 [2024-11-20 23:59:28.435073] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.435343] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.435373] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.435396] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:38.200 [2024-11-20 23:59:28.436854] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.436931] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.436947] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.436962] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.200 [2024-11-20 23:59:28.454017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:38.200 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:38.200 [2024-11-20 23:59:28.455365] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.455413] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.455433] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.455448] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:38.200 [2024-11-20 23:59:28.456564] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.456610] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.456625] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 [2024-11-20 23:59:28.456638] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:38.200 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:38.200 EAL: Scan for (pci) bus failed. 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.200 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:38.200 Attaching to 0000:00:10.0 00:10:38.200 Attached to 0000:00:10.0 00:10:38.461 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:38.461 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.461 23:59:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:38.461 Attaching to 0000:00:11.0 00:10:38.461 Attached to 0000:00:11.0 00:10:38.461 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:38.461 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:38.461 [2024-11-20 23:59:28.687499] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:50.670 23:59:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:50.670 23:59:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:50.670 23:59:40 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.92 00:10:50.670 23:59:40 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.92 00:10:50.670 23:59:40 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:50.670 23:59:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.92 00:10:50.670 23:59:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.92 2 00:10:50.670 remove_attach_helper took 42.92s to complete (handling 2 nvme drive(s)) 23:59:40 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:57.251 23:59:46 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78453 00:10:57.251 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78453) - No such process 00:10:57.251 23:59:46 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78453 00:10:57.251 23:59:46 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:57.251 23:59:46 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:57.251 23:59:46 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:57.251 23:59:46 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79009 00:10:57.251 23:59:46 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:57.251 23:59:46 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79009 00:10:57.251 23:59:46 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:57.251 23:59:46 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79009 ']' 00:10:57.251 23:59:46 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.251 23:59:46 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.251 23:59:46 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.251 23:59:46 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.251 23:59:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.251 [2024-11-20 23:59:46.781231] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
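The target-driven pass that follows flips use_bdev to true: instead of checking sysfs state, the helper asks the running spdk_tgt which PCI addresses still back a bdev and polls until the removed ones disappear. A sketch of that polling helper as traced below (rpc_cmd in the script is a wrapper around scripts/rpc.py, shown here directly):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py bdev_nvme_set_hotplug -e     # let the target watch for hotplug events
  bdev_bdfs() {
    # PCI addresses backing the target's current NVMe bdevs
    $rpc_py bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' \
      | sort -u
  }
  while bdev_bdfs | grep -q '0000:00:10.0'; do
    printf 'Still waiting for %s to be gone\n' 0000:00:10.0
    sleep 0.5
  done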
00:10:57.251 [2024-11-20 23:59:46.781666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79009 ] 00:10:57.251 [2024-11-20 23:59:46.920107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.251 [2024-11-20 23:59:46.968877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.251 23:59:47 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:57.251 23:59:47 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:57.251 23:59:47 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:57.251 23:59:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.251 23:59:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.251 23:59:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.251 23:59:47 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:57.251 23:59:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:57.251 23:59:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:57.251 23:59:47 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:57.251 23:59:47 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:57.251 23:59:47 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:57.251 23:59:47 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:57.251 23:59:47 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:57.251 23:59:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:57.251 23:59:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:57.251 23:59:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:57.251 23:59:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:57.251 23:59:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.836 23:59:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.836 23:59:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.836 23:59:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:03.836 23:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:03.836 [2024-11-20 23:59:53.717383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:11:03.836 [2024-11-20 23:59:53.718479] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.837 [2024-11-20 23:59:53.718513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.837 [2024-11-20 23:59:53.718525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.837 [2024-11-20 23:59:53.718538] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.837 [2024-11-20 23:59:53.718547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.837 [2024-11-20 23:59:53.718554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.837 [2024-11-20 23:59:53.718563] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.837 [2024-11-20 23:59:53.718570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.837 [2024-11-20 23:59:53.718578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.837 [2024-11-20 23:59:53.718584] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.837 [2024-11-20 23:59:53.718591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.837 [2024-11-20 23:59:53.718598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.837 [2024-11-20 23:59:54.117376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:03.837 [2024-11-20 23:59:54.118400] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.837 [2024-11-20 23:59:54.118431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.837 [2024-11-20 23:59:54.118440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.837 [2024-11-20 23:59:54.118451] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.837 [2024-11-20 23:59:54.118458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.837 [2024-11-20 23:59:54.118466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.837 [2024-11-20 23:59:54.118473] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.837 [2024-11-20 23:59:54.118481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.837 [2024-11-20 23:59:54.118487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.837 [2024-11-20 23:59:54.118520] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.837 [2024-11-20 23:59:54.118527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.837 [2024-11-20 23:59:54.118534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.837 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:03.837 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.837 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.837 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.837 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.837 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.837 23:59:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.837 23:59:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.837 23:59:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.837 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:03.837 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.097 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.097 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.097 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.097 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.097 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.097 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.097 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.097 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:04.097 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.097 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.097 23:59:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.333 00:00:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.333 00:00:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.333 00:00:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.333 [2024-11-21 00:00:06.517578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:16.333 [2024-11-21 00:00:06.518953] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.333 [2024-11-21 00:00:06.519059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.333 [2024-11-21 00:00:06.519130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.333 [2024-11-21 00:00:06.519184] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.333 [2024-11-21 00:00:06.519205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.333 [2024-11-21 00:00:06.519258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.333 [2024-11-21 00:00:06.519321] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.333 [2024-11-21 00:00:06.519338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.333 [2024-11-21 00:00:06.519385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.333 [2024-11-21 00:00:06.519394] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.333 [2024-11-21 00:00:06.519402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.333 [2024-11-21 00:00:06.519409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.333 00:00:06 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.333 00:00:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.333 00:00:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.333 00:00:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:16.333 00:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:16.594 [2024-11-21 00:00:06.917588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:16.594 [2024-11-21 00:00:06.918611] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.594 [2024-11-21 00:00:06.918644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.594 [2024-11-21 00:00:06.918655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.594 [2024-11-21 00:00:06.918667] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.594 [2024-11-21 00:00:06.918674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.594 [2024-11-21 00:00:06.918683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.594 [2024-11-21 00:00:06.918690] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.594 [2024-11-21 00:00:06.918698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.594 [2024-11-21 00:00:06.918704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.594 [2024-11-21 00:00:06.918712] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.594 [2024-11-21 00:00:06.918718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.594 [2024-11-21 00:00:06.918727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:16.855 00:00:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.855 00:00:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.855 00:00:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.855 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:17.115 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:17.115 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.115 00:00:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.347 00:00:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.347 00:00:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.347 00:00:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:29.347 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:29.347 [2024-11-21 00:00:19.417793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
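The wait loop traced above leans on the script's bdev_bdfs helper (sw_hotplug.sh@12-13): it lists the PCI addresses of every NVMe-backed bdev still known to the target, and the hotplug test polls it until the list is empty. Reconstructed from the xtrace (where /dev/fd/63 is bash process substitution), the helper is approximately:

    bdev_bdfs() {
        # rpc_cmd is the autotest JSON-RPC wrapper from autotest_common.sh
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

Callers capture the result as an array, bdfs=($(bdev_bdfs)), and test (( ${#bdfs[@]} > 0 )) to decide whether any controller is still attached.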
00:11:29.347 [2024-11-21 00:00:19.419001] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.347 [2024-11-21 00:00:19.419097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.347 [2024-11-21 00:00:19.419172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.348 [2024-11-21 00:00:19.419220] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.348 [2024-11-21 00:00:19.419246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.348 [2024-11-21 00:00:19.419330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.348 [2024-11-21 00:00:19.419357] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.348 [2024-11-21 00:00:19.419374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.348 [2024-11-21 00:00:19.419442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.348 [2024-11-21 00:00:19.419468] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.348 [2024-11-21 00:00:19.419485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.348 [2024-11-21 00:00:19.419508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.348 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:29.348 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.348 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.348 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.348 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.348 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.348 00:00:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.348 00:00:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.348 00:00:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.348 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:29.348 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:29.609 [2024-11-21 00:00:19.917802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:29.609 [2024-11-21 00:00:19.918890] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.609 [2024-11-21 00:00:19.918995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.609 [2024-11-21 00:00:19.919056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.609 [2024-11-21 00:00:19.919118] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.609 [2024-11-21 00:00:19.919136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.609 [2024-11-21 00:00:19.919189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.609 [2024-11-21 00:00:19.919244] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.609 [2024-11-21 00:00:19.919262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.609 [2024-11-21 00:00:19.919318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.609 [2024-11-21 00:00:19.919347] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.609 [2024-11-21 00:00:19.919454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.609 [2024-11-21 00:00:19.919524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.609 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:29.609 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.609 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.609 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.609 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.609 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.609 00:00:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.609 00:00:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.609 00:00:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.609 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:29.609 00:00:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:29.870 00:00:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.870 00:00:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.870 00:00:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:29.870 00:00:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:29.870 00:00:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.870 00:00:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.870 00:00:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.870 00:00:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:29.870 00:00:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:29.870 00:00:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.870 00:00:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.64 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.64 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:11:42.106 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:42.106 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:42.106 00:00:32 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:42.107 00:00:32 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:42.107 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:42.107 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:42.107 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:42.107 00:00:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:42.107 00:00:32 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:48.696 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:48.696 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.696 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.696 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.696 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.696 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:48.696 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.696 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.696 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.696 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.697 00:00:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.697 00:00:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.697 00:00:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:48.697 [2024-11-21 00:00:38.393973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:48.697 [2024-11-21 00:00:38.394843] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.697 [2024-11-21 00:00:38.394937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.697 [2024-11-21 00:00:38.395017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.697 [2024-11-21 00:00:38.395095] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.697 [2024-11-21 00:00:38.395116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.697 [2024-11-21 00:00:38.395162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.697 [2024-11-21 00:00:38.395189] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.697 [2024-11-21 00:00:38.395206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.697 [2024-11-21 00:00:38.395255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.697 [2024-11-21 00:00:38.395279] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.697 [2024-11-21 00:00:38.395306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.697 [2024-11-21 00:00:38.395454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.697 [2024-11-21 00:00:38.793980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
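This is the start of the next remove/attach pass: @38 decrements the event counter, @40 detaches each controller (only the echoed value appears in the xtrace, not its redirection target), and the ABORTED - BY REQUEST completions that follow are the expected fallout of pulling a controller while async event requests are outstanding. Pieced together from the @27-71 line references, the helper has roughly this shape (a sketch, not the SPDK source; $remove_path is hypothetical, since the log never shows the target of the echo):

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3 dev bdfs
        sleep "$hotplug_wait"                                        # @36
        while (( hotplug_events-- )); do                             # @38
            for dev in "${nvmes[@]}"; do                             # @39
                echo 1 > "$remove_path"                              # @40: detach (hypothetical target)
            done
            while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do   # @50: the (( 2 > 0 )) above
                printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
                sleep 0.5
            done
            # @56-62: rescan and rebind each BDF (sketched further below),
            # @66: sleep 12, @68-71: verify both BDFs reappear in bdev_get_bdevs
        done
    }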
00:11:48.697 [2024-11-21 00:00:38.794796] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.697 [2024-11-21 00:00:38.794899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.697 [2024-11-21 00:00:38.794961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.697 [2024-11-21 00:00:38.795020] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.697 [2024-11-21 00:00:38.795041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.697 [2024-11-21 00:00:38.795095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.697 [2024-11-21 00:00:38.795121] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.697 [2024-11-21 00:00:38.795138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.697 [2024-11-21 00:00:38.795319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.697 [2024-11-21 00:00:38.795344] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.697 [2024-11-21 00:00:38.795360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.697 [2024-11-21 00:00:38.795386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.697 00:00:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.697 00:00:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.697 00:00:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.697 00:00:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:48.697 00:00:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:48.697 00:00:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.697 00:00:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.697 00:00:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.697 00:00:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:48.956 00:00:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:48.956 00:00:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.956 00:00:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.198 00:00:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.198 00:00:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.198 00:00:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.198 [2024-11-21 00:00:51.194196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:01.198 [2024-11-21 00:00:51.196600] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.198 [2024-11-21 00:00:51.196708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.198 [2024-11-21 00:00:51.196778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.198 [2024-11-21 00:00:51.196827] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.198 [2024-11-21 00:00:51.196846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.198 [2024-11-21 00:00:51.196913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.198 [2024-11-21 00:00:51.196941] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.198 [2024-11-21 00:00:51.196957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.198 [2024-11-21 00:00:51.196981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.198 [2024-11-21 00:00:51.197060] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.198 [2024-11-21 00:00:51.197079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.198 [2024-11-21 00:00:51.197102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.198 00:00:51 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.198 00:00:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.198 00:00:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.198 00:00:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:01.198 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:01.198 [2024-11-21 00:00:51.594202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:01.198 [2024-11-21 00:00:51.595024] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.198 [2024-11-21 00:00:51.595126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.198 [2024-11-21 00:00:51.595190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.198 [2024-11-21 00:00:51.595245] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.198 [2024-11-21 00:00:51.595265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.198 [2024-11-21 00:00:51.595337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.198 [2024-11-21 00:00:51.595365] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.198 [2024-11-21 00:00:51.595382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.198 [2024-11-21 00:00:51.595433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.198 [2024-11-21 00:00:51.595521] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.198 [2024-11-21 00:00:51.595540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.198 [2024-11-21 00:00:51.595564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.460 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:01.460 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.460 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.460 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.460 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.460 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:01.460 00:00:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.460 00:00:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.460 00:00:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.460 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:01.460 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:01.460 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.460 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.460 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:01.721 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:01.721 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.721 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.721 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.721 00:00:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:01.721 00:00:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:01.721 00:00:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.721 00:00:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.961 00:01:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.961 00:01:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.961 00:01:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.961 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:13.962 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.962 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.962 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.962 [2024-11-21 00:01:04.094405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
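The @56-62 sequence that closes each pass re-attaches the controllers. The xtrace records only the echoed values (1, uio_pci_generic, the BDF twice, then an empty string), not where they are written; a plausible sysfs reconstruction (every path below is an assumption, not taken from the log) is:

    echo 1 > /sys/bus/pci/rescan                                            # @56 (assumed target)
    for dev in "${nvmes[@]}"; do                                            # @58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59 (assumed target)
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @60-61: BDF echoed twice in the trace, shown once (assumed target)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override (assumed target)
    done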
00:12:13.962 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.962 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.962 [2024-11-21 00:01:04.096671] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.962 [2024-11-21 00:01:04.096700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.962 [2024-11-21 00:01:04.096713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.962 [2024-11-21 00:01:04.096726] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.962 [2024-11-21 00:01:04.096737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.962 [2024-11-21 00:01:04.096745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.962 [2024-11-21 00:01:04.096754] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.962 [2024-11-21 00:01:04.096762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.962 [2024-11-21 00:01:04.096771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.962 [2024-11-21 00:01:04.096778] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.962 [2024-11-21 00:01:04.096788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.962 [2024-11-21 00:01:04.096795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.962 00:01:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.962 00:01:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.962 00:01:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.962 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:13.962 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:14.224 [2024-11-21 00:01:04.594412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
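Each full helper run is timed by the autotest wrapper traced earlier at autotest_common.sh@707-720 (local cmd_es=0, then TIMEFORMAT=%2R, exec, and echoing 44.64), which produces the 'remove_attach_helper took ...s' summary lines. The pattern is approximately the following sketch; the real helper also performs the [[ -t 0 ]]/exec redirection seen in the trace:

    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R      # %2R: elapsed real time, two decimal places
        # capture only bash's `time` report; the command's own stdout goes to stderr
        time=$( { time "$@" >&2; } 2>&1 ) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }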
00:12:14.224 [2024-11-21 00:01:04.595103] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.224 [2024-11-21 00:01:04.595136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.224 [2024-11-21 00:01:04.595146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.224 [2024-11-21 00:01:04.595159] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.224 [2024-11-21 00:01:04.595166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.224 [2024-11-21 00:01:04.595175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.224 [2024-11-21 00:01:04.595181] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.224 [2024-11-21 00:01:04.595191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.224 [2024-11-21 00:01:04.595197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.224 [2024-11-21 00:01:04.595206] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.224 [2024-11-21 00:01:04.595212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.224 [2024-11-21 00:01:04.595220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.224 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:14.224 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.224 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.224 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.224 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.224 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.224 00:01:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.224 00:01:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.224 00:01:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.486 00:01:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:26.726 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:26.726 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:26.726 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:26.726 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.726 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.726 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.726 00:01:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.726 00:01:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.726 00:01:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.726 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.726 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:26.726 00:01:16 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.61 00:12:26.726 00:01:16 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.61 00:12:26.726 00:01:16 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:26.726 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.61 00:12:26.726 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.61 2 00:12:26.726 remove_attach_helper took 44.61s to complete (handling 2 nvme drive(s)) 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:26.726 00:01:16 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79009 00:12:26.726 00:01:16 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79009 ']' 00:12:26.727 00:01:16 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79009 00:12:26.727 00:01:16 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:26.727 00:01:16 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.727 00:01:16 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79009 00:12:26.727 00:01:16 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:26.727 00:01:16 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:26.727 00:01:16 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79009' 00:12:26.727 killing process with pid 79009 00:12:26.727 00:01:16 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79009 00:12:26.727 00:01:16 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79009 00:12:26.988 00:01:17 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:27.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:27.574 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:27.574 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:27.890 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:27.890 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:27.890 00:12:27.890 real 2m28.086s 00:12:27.890 user 1m48.269s 00:12:27.890 sys 0m18.085s 00:12:27.890 ************************************ 
00:12:27.890 END TEST sw_hotplug 00:12:27.890 ************************************ 00:12:27.890 00:01:18 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.890 00:01:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.890 00:01:18 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:27.890 00:01:18 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:27.890 00:01:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:27.890 00:01:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.890 00:01:18 -- common/autotest_common.sh@10 -- # set +x 00:12:27.890 ************************************ 00:12:27.890 START TEST nvme_xnvme 00:12:27.890 ************************************ 00:12:27.890 00:01:18 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:27.890 * Looking for test storage... 00:12:27.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:27.890 00:01:18 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:27.890 00:01:18 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:27.890 00:01:18 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:28.151 00:01:18 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:28.151 00:01:18 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.151 00:01:18 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:28.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.151 --rc genhtml_branch_coverage=1 00:12:28.151 --rc genhtml_function_coverage=1 00:12:28.151 --rc genhtml_legend=1 00:12:28.151 --rc geninfo_all_blocks=1 00:12:28.151 --rc geninfo_unexecuted_blocks=1 00:12:28.151 00:12:28.151 ' 00:12:28.151 00:01:18 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:28.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.151 --rc genhtml_branch_coverage=1 00:12:28.151 --rc genhtml_function_coverage=1 00:12:28.151 --rc genhtml_legend=1 00:12:28.151 --rc geninfo_all_blocks=1 00:12:28.151 --rc geninfo_unexecuted_blocks=1 00:12:28.151 00:12:28.151 ' 00:12:28.151 00:01:18 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:28.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.151 --rc genhtml_branch_coverage=1 00:12:28.151 --rc genhtml_function_coverage=1 00:12:28.151 --rc genhtml_legend=1 00:12:28.151 --rc geninfo_all_blocks=1 00:12:28.151 --rc geninfo_unexecuted_blocks=1 00:12:28.151 00:12:28.151 ' 00:12:28.151 00:01:18 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:28.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.151 --rc genhtml_branch_coverage=1 00:12:28.151 --rc genhtml_function_coverage=1 00:12:28.151 --rc genhtml_legend=1 00:12:28.151 --rc geninfo_all_blocks=1 00:12:28.151 --rc geninfo_unexecuted_blocks=1 00:12:28.151 00:12:28.151 ' 00:12:28.151 00:01:18 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.151 00:01:18 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.151 00:01:18 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.152 00:01:18 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.152 00:01:18 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.152 00:01:18 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:28.152 00:01:18 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.152 00:01:18 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:28.152 00:01:18 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:28.152 00:01:18 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.152 00:01:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.152 ************************************ 00:12:28.152 START TEST xnvme_to_malloc_dd_copy 00:12:28.152 ************************************ 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:28.152 00:01:18 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:28.152 00:01:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:28.152 { 00:12:28.152 "subsystems": [ 00:12:28.152 { 00:12:28.152 "subsystem": "bdev", 00:12:28.152 "config": [ 00:12:28.152 { 00:12:28.152 "params": { 00:12:28.152 "block_size": 512, 00:12:28.152 "num_blocks": 2097152, 00:12:28.152 "name": "malloc0" 00:12:28.152 }, 00:12:28.152 "method": "bdev_malloc_create" 00:12:28.152 }, 00:12:28.152 { 00:12:28.152 "params": { 00:12:28.152 "io_mechanism": "libaio", 00:12:28.152 "filename": "/dev/nullb0", 00:12:28.152 "name": "null0" 00:12:28.152 }, 00:12:28.152 "method": "bdev_xnvme_create" 00:12:28.152 }, 00:12:28.152 { 00:12:28.152 "method": "bdev_wait_for_examine" 00:12:28.152 } 00:12:28.152 ] 00:12:28.152 } 00:12:28.152 ] 00:12:28.152 } 00:12:28.152 [2024-11-21 00:01:18.444574] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
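The JSON block above is what gen_conf feeds to spdk_dd over a file descriptor (--json /dev/fd/62 at @42): a 2097152-block, 512-byte malloc bdev (1 GiB, matching init_null_blk gb=1) as the source and an xnvme bdev over /dev/nullb0 with the libaio I/O mechanism as the target. Saved to a regular file, the equivalent standalone invocation would be:

    # conf.json holds the bdev_malloc_create / bdev_xnvme_create config shown above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json conf.json

--ib and --ob name the input and output bdevs declared in the config; the 'Copying: .../1024 [MB]' progress lines that follow are this command's output, and the @47 run afterwards repeats the copy in the opposite direction (--ib=null0 --ob=malloc0).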
00:12:28.152 [2024-11-21 00:01:18.444862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80368 ] 00:12:28.413 [2024-11-21 00:01:18.582368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.413 [2024-11-21 00:01:18.633290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.797  [2024-11-21T00:01:21.160Z] Copying: 220/1024 [MB] (220 MBps) [2024-11-21T00:01:22.101Z] Copying: 442/1024 [MB] (222 MBps) [2024-11-21T00:01:23.043Z] Copying: 665/1024 [MB] (222 MBps) [2024-11-21T00:01:23.304Z] Copying: 930/1024 [MB] (265 MBps) [2024-11-21T00:01:23.878Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:12:33.457 00:12:33.457 00:01:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:33.457 00:01:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:33.457 00:01:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:33.457 00:01:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:33.457 { 00:12:33.457 "subsystems": [ 00:12:33.457 { 00:12:33.457 "subsystem": "bdev", 00:12:33.457 "config": [ 00:12:33.457 { 00:12:33.457 "params": { 00:12:33.457 "block_size": 512, 00:12:33.457 "num_blocks": 2097152, 00:12:33.457 "name": "malloc0" 00:12:33.457 }, 00:12:33.457 "method": "bdev_malloc_create" 00:12:33.457 }, 00:12:33.457 { 00:12:33.457 "params": { 00:12:33.457 "io_mechanism": "libaio", 00:12:33.457 "filename": "/dev/nullb0", 00:12:33.457 "name": "null0" 00:12:33.457 }, 00:12:33.457 "method": "bdev_xnvme_create" 00:12:33.457 }, 00:12:33.457 { 00:12:33.457 "method": "bdev_wait_for_examine" 00:12:33.457 } 00:12:33.457 ] 00:12:33.457 } 00:12:33.457 ] 00:12:33.457 } 00:12:33.457 [2024-11-21 00:01:23.665578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:33.457 [2024-11-21 00:01:23.665691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80436 ] 00:12:33.457 [2024-11-21 00:01:23.799621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.457 [2024-11-21 00:01:23.834639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.841  [2024-11-21T00:01:26.206Z] Copying: 226/1024 [MB] (226 MBps) [2024-11-21T00:01:27.151Z] Copying: 451/1024 [MB] (224 MBps) [2024-11-21T00:01:28.094Z] Copying: 755/1024 [MB] (304 MBps) [2024-11-21T00:01:28.354Z] Copying: 1024/1024 [MB] (average 264 MBps) 00:12:37.933 00:12:37.933 00:01:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:37.933 00:01:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:37.933 00:01:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:37.933 00:01:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:37.933 00:01:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:37.933 00:01:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:37.933 { 00:12:37.933 "subsystems": [ 00:12:37.933 { 00:12:37.933 "subsystem": "bdev", 00:12:37.933 "config": [ 00:12:37.933 { 00:12:37.933 "params": { 00:12:37.933 "block_size": 512, 00:12:37.933 "num_blocks": 2097152, 00:12:37.933 "name": "malloc0" 00:12:37.933 }, 00:12:37.933 "method": "bdev_malloc_create" 00:12:37.933 }, 00:12:37.933 { 00:12:37.933 "params": { 00:12:37.933 "io_mechanism": "io_uring", 00:12:37.933 "filename": "/dev/nullb0", 00:12:37.933 "name": "null0" 00:12:37.933 }, 00:12:37.933 "method": "bdev_xnvme_create" 00:12:37.933 }, 00:12:37.933 { 00:12:37.933 "method": "bdev_wait_for_examine" 00:12:37.933 } 00:12:37.933 ] 00:12:37.933 } 00:12:37.933 ] 00:12:37.933 } 00:12:37.933 [2024-11-21 00:01:28.345485] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
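Only io_mechanism changes between the libaio and io_uring passes; the xnvme_io loop rewrites that single key of method_bdev_xnvme_create_0 while the malloc bdev and the spdk_dd invocation stay identical. The delta against the sketch above:

# Second pass through the xnvme_io loop: same bdev, io_uring backend.
{
  "params": { "io_mechanism": "io_uring", "filename": "/dev/nullb0", "name": "null0" },
  "method": "bdev_xnvme_create"
}

On this null_blk device the switch lifts the averaged copy rate from 237 and 264 MBps under libaio to 312 and 320 MBps under io_uring, per the progress lines around this point.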
00:12:37.933 [2024-11-21 00:01:28.345596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80496 ] 00:12:38.194 [2024-11-21 00:01:28.480012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.194 [2024-11-21 00:01:28.512544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.578  [2024-11-21T00:01:30.940Z] Copying: 312/1024 [MB] (312 MBps) [2024-11-21T00:01:31.882Z] Copying: 625/1024 [MB] (312 MBps) [2024-11-21T00:01:32.142Z] Copying: 938/1024 [MB] (313 MBps) [2024-11-21T00:01:32.404Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:12:41.983 00:12:41.983 00:01:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:41.983 00:01:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:41.983 00:01:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:41.983 00:01:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:41.983 { 00:12:41.983 "subsystems": [ 00:12:41.983 { 00:12:41.983 "subsystem": "bdev", 00:12:41.983 "config": [ 00:12:41.983 { 00:12:41.983 "params": { 00:12:41.983 "block_size": 512, 00:12:41.983 "num_blocks": 2097152, 00:12:41.983 "name": "malloc0" 00:12:41.983 }, 00:12:41.983 "method": "bdev_malloc_create" 00:12:41.983 }, 00:12:41.983 { 00:12:41.983 "params": { 00:12:41.983 "io_mechanism": "io_uring", 00:12:41.983 "filename": "/dev/nullb0", 00:12:41.983 "name": "null0" 00:12:41.983 }, 00:12:41.983 "method": "bdev_xnvme_create" 00:12:41.983 }, 00:12:41.983 { 00:12:41.983 "method": "bdev_wait_for_examine" 00:12:41.983 } 00:12:41.983 ] 00:12:41.983 } 00:12:41.983 ] 00:12:41.983 } 00:12:41.983 [2024-11-21 00:01:32.362445] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:41.983 [2024-11-21 00:01:32.362714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80550 ] 00:12:42.243 [2024-11-21 00:01:32.498862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.243 [2024-11-21 00:01:32.537574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.627  [2024-11-21T00:01:34.991Z] Copying: 319/1024 [MB] (319 MBps) [2024-11-21T00:01:35.933Z] Copying: 639/1024 [MB] (320 MBps) [2024-11-21T00:01:36.194Z] Copying: 959/1024 [MB] (320 MBps) [2024-11-21T00:01:36.454Z] Copying: 1024/1024 [MB] (average 320 MBps) 00:12:46.033 00:12:46.034 00:01:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:46.034 00:01:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:46.034 00:12:46.034 real 0m17.949s 00:12:46.034 user 0m14.969s 00:12:46.034 sys 0m2.458s 00:12:46.034 ************************************ 00:12:46.034 00:01:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.034 00:01:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:46.034 END TEST xnvme_to_malloc_dd_copy 00:12:46.034 ************************************ 00:12:46.034 00:01:36 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:46.034 00:01:36 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:46.034 00:01:36 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.034 00:01:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:46.034 ************************************ 00:12:46.034 START TEST xnvme_bdevperf 00:12:46.034 ************************************ 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:46.034 
00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:46.034 00:01:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:46.034 { 00:12:46.034 "subsystems": [ 00:12:46.034 { 00:12:46.034 "subsystem": "bdev", 00:12:46.034 "config": [ 00:12:46.034 { 00:12:46.034 "params": { 00:12:46.034 "io_mechanism": "libaio", 00:12:46.034 "filename": "/dev/nullb0", 00:12:46.034 "name": "null0" 00:12:46.034 }, 00:12:46.034 "method": "bdev_xnvme_create" 00:12:46.034 }, 00:12:46.034 { 00:12:46.034 "method": "bdev_wait_for_examine" 00:12:46.034 } 00:12:46.034 ] 00:12:46.034 } 00:12:46.034 ] 00:12:46.034 } 00:12:46.034 [2024-11-21 00:01:36.432115] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:46.034 [2024-11-21 00:01:36.432223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80629 ] 00:12:46.295 [2024-11-21 00:01:36.567068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.295 [2024-11-21 00:01:36.606379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.295 Running I/O for 5 seconds... 00:12:48.620 203968.00 IOPS, 796.75 MiB/s [2024-11-21T00:01:39.984Z] 204064.00 IOPS, 797.12 MiB/s [2024-11-21T00:01:40.928Z] 204074.67 IOPS, 797.17 MiB/s [2024-11-21T00:01:41.871Z] 203920.00 IOPS, 796.56 MiB/s [2024-11-21T00:01:41.871Z] 203955.20 IOPS, 796.70 MiB/s 00:12:51.450 Latency(us) 00:12:51.450 [2024-11-21T00:01:41.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.450 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:51.450 null0 : 5.00 203891.42 796.45 0.00 0.00 311.68 252.06 1531.27 00:12:51.450 [2024-11-21T00:01:41.871Z] =================================================================================================================== 00:12:51.450 [2024-11-21T00:01:41.871Z] Total : 203891.42 796.45 0.00 0.00 311.68 252.06 1531.27 00:12:51.450 00:01:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:51.450 00:01:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:51.450 00:01:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:51.450 00:01:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:51.450 00:01:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:51.450 00:01:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:51.712 { 00:12:51.712 "subsystems": [ 00:12:51.712 { 00:12:51.712 "subsystem": "bdev", 00:12:51.712 "config": [ 00:12:51.712 { 00:12:51.712 "params": { 00:12:51.712 "io_mechanism": "io_uring", 00:12:51.712 "filename": "/dev/nullb0", 00:12:51.712 "name": "null0" 00:12:51.712 }, 00:12:51.712 "method": "bdev_xnvme_create" 00:12:51.712 }, 
00:12:51.712 { 00:12:51.712 "method": "bdev_wait_for_examine" 00:12:51.712 } 00:12:51.712 ] 00:12:51.712 } 00:12:51.712 ] 00:12:51.712 } 00:12:51.712 [2024-11-21 00:01:41.905698] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:51.712 [2024-11-21 00:01:41.905807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80692 ] 00:12:51.712 [2024-11-21 00:01:42.039017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.712 [2024-11-21 00:01:42.073451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.974 Running I/O for 5 seconds... 00:12:53.867 234880.00 IOPS, 917.50 MiB/s [2024-11-21T00:01:45.236Z] 234752.00 IOPS, 917.00 MiB/s [2024-11-21T00:01:46.180Z] 234730.67 IOPS, 916.92 MiB/s [2024-11-21T00:01:47.567Z] 234688.00 IOPS, 916.75 MiB/s 00:12:57.146 Latency(us) 00:12:57.146 [2024-11-21T00:01:47.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.146 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:57.146 null0 : 5.00 234631.76 916.53 0.00 0.00 270.72 148.09 1474.56 00:12:57.146 [2024-11-21T00:01:47.567Z] =================================================================================================================== 00:12:57.146 [2024-11-21T00:01:47.567Z] Total : 234631.76 916.53 0.00 0.00 270.72 148.09 1474.56 00:12:57.146 00:01:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:57.146 00:01:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:57.146 00:12:57.146 real 0m10.953s 00:12:57.146 user 0m8.590s 00:12:57.146 sys 0m2.132s 00:12:57.146 00:01:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:57.146 00:01:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.146 ************************************ 00:12:57.146 END TEST xnvme_bdevperf 00:12:57.146 ************************************ 00:12:57.146 00:12:57.146 real 0m29.175s 00:12:57.146 user 0m23.670s 00:12:57.146 sys 0m4.712s 00:12:57.146 00:01:47 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:57.146 00:01:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.146 ************************************ 00:12:57.146 END TEST nvme_xnvme 00:12:57.146 ************************************ 00:12:57.146 00:01:47 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:57.146 00:01:47 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:57.146 00:01:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:57.146 00:01:47 -- common/autotest_common.sh@10 -- # set +x 00:12:57.146 ************************************ 00:12:57.146 START TEST blockdev_xnvme 00:12:57.146 ************************************ 00:12:57.146 00:01:47 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:57.146 * Looking for test storage... 
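Both bdevperf passes use one invocation shape and again differ only in io_mechanism; here io_uring sustained about 234.6k 4 KiB randread IOPS against about 203.9k for libaio on the same device. A hand-run sketch of the io_uring pass, with /tmp/null0.json standing in for the config streamed over /dev/fd/62:

# 5-second 4 KiB randread at queue depth 64 against the null0 xnvme bdev.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/null0.json -q 64 -w randread -t 5 -T null0 -o 4096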
00:12:57.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:57.146 00:01:47 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:57.146 00:01:47 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:57.146 00:01:47 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:57.146 00:01:47 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.146 00:01:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:57.407 00:01:47 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.408 00:01:47 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.408 00:01:47 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.408 00:01:47 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:57.408 00:01:47 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.408 00:01:47 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:57.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.408 --rc genhtml_branch_coverage=1 00:12:57.408 --rc genhtml_function_coverage=1 00:12:57.408 --rc genhtml_legend=1 00:12:57.408 --rc geninfo_all_blocks=1 00:12:57.408 --rc geninfo_unexecuted_blocks=1 00:12:57.408 00:12:57.408 ' 00:12:57.408 00:01:47 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:57.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.408 --rc genhtml_branch_coverage=1 00:12:57.408 --rc genhtml_function_coverage=1 00:12:57.408 --rc genhtml_legend=1 
00:12:57.408 --rc geninfo_all_blocks=1 00:12:57.408 --rc geninfo_unexecuted_blocks=1 00:12:57.408 00:12:57.408 ' 00:12:57.408 00:01:47 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:57.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.408 --rc genhtml_branch_coverage=1 00:12:57.408 --rc genhtml_function_coverage=1 00:12:57.408 --rc genhtml_legend=1 00:12:57.408 --rc geninfo_all_blocks=1 00:12:57.408 --rc geninfo_unexecuted_blocks=1 00:12:57.408 00:12:57.408 ' 00:12:57.408 00:01:47 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:57.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.408 --rc genhtml_branch_coverage=1 00:12:57.408 --rc genhtml_function_coverage=1 00:12:57.408 --rc genhtml_legend=1 00:12:57.408 --rc geninfo_all_blocks=1 00:12:57.408 --rc geninfo_unexecuted_blocks=1 00:12:57.408 00:12:57.408 ' 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80831 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80831 00:12:57.408 00:01:47 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 80831 ']' 00:12:57.408 00:01:47 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:57.408 00:01:47 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:57.408 00:01:47 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:57.408 00:01:47 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.408 00:01:47 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:57.408 00:01:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.408 [2024-11-21 00:01:47.647445] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:57.408 [2024-11-21 00:01:47.647756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80831 ] 00:12:57.408 [2024-11-21 00:01:47.783059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.667 [2024-11-21 00:01:47.831954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.239 00:01:48 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:58.239 00:01:48 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:58.239 00:01:48 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:58.239 00:01:48 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:58.239 00:01:48 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:58.239 00:01:48 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:58.239 00:01:48 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:58.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:58.760 Waiting for block devices as requested 00:12:58.760 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.760 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.760 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:59.020 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.440 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # 
is_block_zoned nvme1n1 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.440 00:01:54 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.440 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:04.440 nvme0n1 00:13:04.440 nvme1n1 00:13:04.440 nvme2n1 00:13:04.440 nvme2n2 00:13:04.440 nvme2n3 00:13:04.440 nvme3n1 00:13:04.440 00:01:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.441 00:01:54 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "bb5b5327-2d3f-4d87-9103-c03c4465ce30"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "bb5b5327-2d3f-4d87-9103-c03c4465ce30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "ea8fbd00-143e-48a0-908c-0ff4e72b7988"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ea8fbd00-143e-48a0-908c-0ff4e72b7988",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b54b6bf0-99ca-441e-96de-0d64c3b3b168"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b54b6bf0-99ca-441e-96de-0d64c3b3b168",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e48d308e-397b-4506-8153-8f39fa376c5a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e48d308e-397b-4506-8153-8f39fa376c5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "f7e313e3-2032-44d8-b23a-55ec85ce03bd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f7e313e3-2032-44d8-b23a-55ec85ce03bd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "405bd802-f97b-4843-a295-146ee91d8710"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "405bd802-f97b-4843-a295-146ee91d8710",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80831 
00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 80831 ']' 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 80831 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80831 00:13:04.441 killing process with pid 80831 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80831' 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 80831 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 80831 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:04.441 00:01:54 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.441 00:01:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.441 ************************************ 00:13:04.441 START TEST bdev_hello_world 00:13:04.441 ************************************ 00:13:04.441 00:01:54 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:04.702 [2024-11-21 00:01:54.869067] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:04.702 [2024-11-21 00:01:54.869278] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81179 ] 00:13:04.702 [2024-11-21 00:01:54.999641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.702 [2024-11-21 00:01:55.028977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.965 [2024-11-21 00:01:55.186347] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:04.965 [2024-11-21 00:01:55.186509] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:04.965 [2024-11-21 00:01:55.186531] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:04.965 [2024-11-21 00:01:55.188046] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:04.965 [2024-11-21 00:01:55.188437] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:04.965 [2024-11-21 00:01:55.188459] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:04.965 [2024-11-21 00:01:55.188657] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:13:04.965 00:13:04.965 [2024-11-21 00:01:55.188678] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:04.965 ************************************ 00:13:04.965 END TEST bdev_hello_world 00:13:04.965 ************************************ 00:13:04.965 00:13:04.965 real 0m0.496s 00:13:04.965 user 0m0.264s 00:13:04.965 sys 0m0.125s 00:13:04.965 00:01:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.965 00:01:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:04.965 00:01:55 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:04.965 00:01:55 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:04.965 00:01:55 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.965 00:01:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.965 ************************************ 00:13:04.965 START TEST bdev_bounds 00:13:04.965 ************************************ 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:04.965 Process bdevio pid: 81199 00:13:04.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81199 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81199' 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81199 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81199 ']' 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.965 00:01:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:05.227 [2024-11-21 00:01:55.418111] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
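bdev_bounds launches the bdevio CUnit harness in wait-for-RPC mode and then triggers the suites over RPC; the two commands as run here, with the harness's own workspace paths (backgrounding the first is a sketch of what the wrapper does):

# -w: wait for an RPC trigger before running tests; flags as passed by the harness.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests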
00:13:05.227 [2024-11-21 00:01:55.418223] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81199 ] 00:13:05.227 [2024-11-21 00:01:55.552073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.227 [2024-11-21 00:01:55.591704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.227 [2024-11-21 00:01:55.591872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.227 [2024-11-21 00:01:55.591943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.172 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.172 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:06.172 00:01:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:06.172 I/O targets: 00:13:06.172 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:06.172 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:06.172 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:06.172 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:06.172 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:06.172 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:06.172 00:13:06.172 00:13:06.172 CUnit - A unit testing framework for C - Version 2.1-3 00:13:06.172 http://cunit.sourceforge.net/ 00:13:06.172 00:13:06.172 00:13:06.172 Suite: bdevio tests on: nvme3n1 00:13:06.172 Test: blockdev write read block ...passed 00:13:06.172 Test: blockdev write zeroes read block ...passed 00:13:06.172 Test: blockdev write zeroes read no split ...passed 00:13:06.172 Test: blockdev write zeroes read split ...passed 00:13:06.172 Test: blockdev write zeroes read split partial ...passed 00:13:06.172 Test: blockdev reset ...passed 00:13:06.172 Test: blockdev write read 8 blocks ...passed 00:13:06.172 Test: blockdev write read size > 128k ...passed 00:13:06.172 Test: blockdev write read invalid size ...passed 00:13:06.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:06.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.172 Test: blockdev write read max offset ...passed 00:13:06.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.172 Test: blockdev writev readv 8 blocks ...passed 00:13:06.172 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.172 Test: blockdev writev readv block ...passed 00:13:06.172 Test: blockdev writev readv size > 128k ...passed 00:13:06.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.172 Test: blockdev comparev and writev ...passed 00:13:06.172 Test: blockdev nvme passthru rw ...passed 00:13:06.172 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.172 Test: blockdev nvme admin passthru ...passed 00:13:06.172 Test: blockdev copy ...passed 00:13:06.172 Suite: bdevio tests on: nvme2n3 00:13:06.172 Test: blockdev write read block ...passed 00:13:06.172 Test: blockdev write zeroes read block ...passed 00:13:06.172 Test: blockdev write zeroes read no split ...passed 00:13:06.172 Test: blockdev write zeroes read split ...passed 00:13:06.172 Test: blockdev write zeroes read split partial ...passed 00:13:06.172 Test: blockdev reset ...passed 
00:13:06.172 Test: blockdev write read 8 blocks ...passed 00:13:06.172 Test: blockdev write read size > 128k ...passed 00:13:06.172 Test: blockdev write read invalid size ...passed 00:13:06.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:06.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.172 Test: blockdev write read max offset ...passed 00:13:06.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.172 Test: blockdev writev readv 8 blocks ...passed 00:13:06.172 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.172 Test: blockdev writev readv block ...passed 00:13:06.172 Test: blockdev writev readv size > 128k ...passed 00:13:06.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.172 Test: blockdev comparev and writev ...passed 00:13:06.172 Test: blockdev nvme passthru rw ...passed 00:13:06.172 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.172 Test: blockdev nvme admin passthru ...passed 00:13:06.172 Test: blockdev copy ...passed 00:13:06.172 Suite: bdevio tests on: nvme2n2 00:13:06.172 Test: blockdev write read block ...passed 00:13:06.172 Test: blockdev write zeroes read block ...passed 00:13:06.172 Test: blockdev write zeroes read no split ...passed 00:13:06.172 Test: blockdev write zeroes read split ...passed 00:13:06.172 Test: blockdev write zeroes read split partial ...passed 00:13:06.172 Test: blockdev reset ...passed 00:13:06.172 Test: blockdev write read 8 blocks ...passed 00:13:06.172 Test: blockdev write read size > 128k ...passed 00:13:06.172 Test: blockdev write read invalid size ...passed 00:13:06.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:06.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.172 Test: blockdev write read max offset ...passed 00:13:06.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.172 Test: blockdev writev readv 8 blocks ...passed 00:13:06.172 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.172 Test: blockdev writev readv block ...passed 00:13:06.172 Test: blockdev writev readv size > 128k ...passed 00:13:06.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.172 Test: blockdev comparev and writev ...passed 00:13:06.172 Test: blockdev nvme passthru rw ...passed 00:13:06.172 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.172 Test: blockdev nvme admin passthru ...passed 00:13:06.172 Test: blockdev copy ...passed 00:13:06.172 Suite: bdevio tests on: nvme2n1 00:13:06.172 Test: blockdev write read block ...passed 00:13:06.172 Test: blockdev write zeroes read block ...passed 00:13:06.172 Test: blockdev write zeroes read no split ...passed 00:13:06.172 Test: blockdev write zeroes read split ...passed 00:13:06.172 Test: blockdev write zeroes read split partial ...passed 00:13:06.172 Test: blockdev reset ...passed 00:13:06.172 Test: blockdev write read 8 blocks ...passed 00:13:06.172 Test: blockdev write read size > 128k ...passed 00:13:06.172 Test: blockdev write read invalid size ...passed 00:13:06.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:06.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.172 Test: blockdev write read max offset ...passed 00:13:06.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.172 Test: blockdev writev readv 8 blocks 
...passed 00:13:06.172 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.172 Test: blockdev writev readv block ...passed 00:13:06.172 Test: blockdev writev readv size > 128k ...passed 00:13:06.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.172 Test: blockdev comparev and writev ...passed 00:13:06.172 Test: blockdev nvme passthru rw ...passed 00:13:06.172 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.172 Test: blockdev nvme admin passthru ...passed 00:13:06.172 Test: blockdev copy ...passed 00:13:06.172 Suite: bdevio tests on: nvme1n1 00:13:06.172 Test: blockdev write read block ...passed 00:13:06.173 Test: blockdev write zeroes read block ...passed 00:13:06.173 Test: blockdev write zeroes read no split ...passed 00:13:06.173 Test: blockdev write zeroes read split ...passed 00:13:06.173 Test: blockdev write zeroes read split partial ...passed 00:13:06.173 Test: blockdev reset ...passed 00:13:06.434 Test: blockdev write read 8 blocks ...passed 00:13:06.434 Test: blockdev write read size > 128k ...passed 00:13:06.434 Test: blockdev write read invalid size ...passed 00:13:06.434 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:06.434 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.434 Test: blockdev write read max offset ...passed 00:13:06.434 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.434 Test: blockdev writev readv 8 blocks ...passed 00:13:06.434 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.434 Test: blockdev writev readv block ...passed 00:13:06.434 Test: blockdev writev readv size > 128k ...passed 00:13:06.434 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.434 Test: blockdev comparev and writev ...passed 00:13:06.434 Test: blockdev nvme passthru rw ...passed 00:13:06.434 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.434 Test: blockdev nvme admin passthru ...passed 00:13:06.434 Test: blockdev copy ...passed 00:13:06.434 Suite: bdevio tests on: nvme0n1 00:13:06.434 Test: blockdev write read block ...passed 00:13:06.434 Test: blockdev write zeroes read block ...passed 00:13:06.434 Test: blockdev write zeroes read no split ...passed 00:13:06.434 Test: blockdev write zeroes read split ...passed 00:13:06.434 Test: blockdev write zeroes read split partial ...passed 00:13:06.434 Test: blockdev reset ...passed 00:13:06.434 Test: blockdev write read 8 blocks ...passed 00:13:06.434 Test: blockdev write read size > 128k ...passed 00:13:06.434 Test: blockdev write read invalid size ...passed 00:13:06.434 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:06.434 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:06.434 Test: blockdev write read max offset ...passed 00:13:06.434 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:06.434 Test: blockdev writev readv 8 blocks ...passed 00:13:06.434 Test: blockdev writev readv 30 x 1block ...passed 00:13:06.435 Test: blockdev writev readv block ...passed 00:13:06.435 Test: blockdev writev readv size > 128k ...passed 00:13:06.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:06.435 Test: blockdev comparev and writev ...passed 00:13:06.435 Test: blockdev nvme passthru rw ...passed 00:13:06.435 Test: blockdev nvme passthru vendor specific ...passed 00:13:06.435 Test: blockdev nvme admin passthru ...passed 00:13:06.435 Test: blockdev copy ...passed 
00:13:06.435 00:13:06.435 Run Summary: Type Total Ran Passed Failed Inactive 00:13:06.435 suites 6 6 n/a 0 0 00:13:06.435 tests 138 138 138 0 0 00:13:06.435 asserts 780 780 780 0 n/a 00:13:06.435 00:13:06.435 Elapsed time = 0.627 seconds 00:13:06.435 0 00:13:06.435 00:01:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81199 00:13:06.435 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81199 ']' 00:13:06.435 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81199 00:13:06.435 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:06.435 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:06.435 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81199 00:13:06.435 killing process with pid 81199 00:13:06.435 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:06.435 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:06.435 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81199' 00:13:06.435 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81199 00:13:06.435 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81199 00:13:06.697 ************************************ 00:13:06.697 END TEST bdev_bounds 00:13:06.697 ************************************ 00:13:06.697 00:01:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:06.697 00:13:06.697 real 0m1.554s 00:13:06.697 user 0m3.846s 00:13:06.697 sys 0m0.281s 00:13:06.697 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.697 00:01:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:06.697 00:01:56 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:06.697 00:01:56 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:06.697 00:01:56 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:06.697 00:01:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.697 ************************************ 00:13:06.697 START TEST bdev_nbd 00:13:06.697 ************************************ 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
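The bdev_bounds teardown above runs through the harness's killprocess helper. A minimal bash sketch of what that xtrace shows (liveness probe, process-name lookup, kill, reap); the real helper in common/autotest_common.sh also special-cases sudo-wrapped processes, which is omitted here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # no pid recorded, nothing to do
        kill -0 "$pid" || return 1             # probe: is the process still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" in the trace above
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap it so the RPC socket frees up
    }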
00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81253 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81253 /var/tmp/spdk-nbd.sock 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81253 ']' 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:06.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.697 00:01:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:06.697 [2024-11-21 00:01:57.055128] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
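What follows is the per-device xtrace of that loop. As a compact, hedged summary of the RPC sequence it drives (the socket path and RPC names are exactly as traced; the temp-file path is shortened here, and the retry pacing is an assumption since the trace succeeds on the first probe):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC nbd_start_disk nvme0n1 /dev/nbd0      # expose a bdev as a kernel NBD node

    # waitfornbd, reconstructed from the trace: poll /proc/partitions until the
    # node shows up, then prove it serves I/O with one 4 KiB O_DIRECT read.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                          # assumed pacing; elided in this trace
        done
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                       # a zero-byte read means the node is dead
    }
    waitfornbd nbd0

    $RPC nbd_get_disks                         # JSON list of nbd_device/bdev_name pairs
    $RPC nbd_stop_disk /dev/nbd0               # tear the mapping back down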
00:13:06.697 [2024-11-21 00:01:57.055479] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.959 [2024-11-21 00:01:57.193534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.959 [2024-11-21 00:01:57.244424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:07.538 00:01:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.799 
1+0 records in 00:13:07.799 1+0 records out 00:13:07.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000942012 s, 4.3 MB/s 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:07.799 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.060 1+0 records in 00:13:08.060 1+0 records out 00:13:08.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131627 s, 3.1 MB/s 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.060 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:08.321 00:01:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.321 1+0 records in 00:13:08.321 1+0 records out 00:13:08.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113336 s, 3.6 MB/s 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.321 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:08.322 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:08.322 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.322 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.322 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.583 1+0 records in 00:13:08.583 1+0 records out 00:13:08.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559239 s, 7.3 MB/s 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.583 00:01:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.845 1+0 records in 00:13:08.845 1+0 records out 00:13:08.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114089 s, 3.6 MB/s 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:08.845 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:09.105 00:01:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.105 1+0 records in 00:13:09.105 1+0 records out 00:13:09.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000928906 s, 4.4 MB/s 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:09.105 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:09.364 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:09.364 { 00:13:09.364 "nbd_device": "/dev/nbd0", 00:13:09.364 "bdev_name": "nvme0n1" 00:13:09.364 }, 00:13:09.364 { 00:13:09.364 "nbd_device": "/dev/nbd1", 00:13:09.364 "bdev_name": "nvme1n1" 00:13:09.364 }, 00:13:09.364 { 00:13:09.364 "nbd_device": "/dev/nbd2", 00:13:09.364 "bdev_name": "nvme2n1" 00:13:09.364 }, 00:13:09.364 { 00:13:09.364 "nbd_device": "/dev/nbd3", 00:13:09.364 "bdev_name": "nvme2n2" 00:13:09.365 }, 00:13:09.365 { 00:13:09.365 "nbd_device": "/dev/nbd4", 00:13:09.365 "bdev_name": "nvme2n3" 00:13:09.365 }, 00:13:09.365 { 00:13:09.365 "nbd_device": "/dev/nbd5", 00:13:09.365 "bdev_name": "nvme3n1" 00:13:09.365 } 00:13:09.365 ]' 00:13:09.365 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:09.365 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:09.365 { 00:13:09.365 "nbd_device": "/dev/nbd0", 00:13:09.365 "bdev_name": "nvme0n1" 00:13:09.365 }, 00:13:09.365 { 00:13:09.365 "nbd_device": "/dev/nbd1", 00:13:09.365 "bdev_name": "nvme1n1" 00:13:09.365 }, 00:13:09.365 { 00:13:09.365 "nbd_device": "/dev/nbd2", 00:13:09.365 "bdev_name": "nvme2n1" 00:13:09.365 }, 00:13:09.365 { 00:13:09.365 "nbd_device": "/dev/nbd3", 00:13:09.365 "bdev_name": "nvme2n2" 00:13:09.365 }, 00:13:09.365 { 00:13:09.365 "nbd_device": "/dev/nbd4", 00:13:09.365 "bdev_name": "nvme2n3" 00:13:09.365 }, 00:13:09.365 { 00:13:09.365 "nbd_device": "/dev/nbd5", 00:13:09.365 "bdev_name": "nvme3n1" 00:13:09.365 } 00:13:09.365 ]' 00:13:09.365 00:01:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:09.365 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:09.365 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.365 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:09.365 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.365 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:09.365 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.365 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:09.626 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.626 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.626 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.626 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.626 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.626 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.626 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.626 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.626 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.626 00:01:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:09.887 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:09.887 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:09.887 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:09.887 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.887 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.887 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:09.887 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.887 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.887 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.887 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:10.149 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:10.149 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:10.149 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:10.149 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.149 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.149 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:10.149 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.149 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.149 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.149 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:10.149 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.410 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:10.670 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:10.670 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:10.670 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:10.670 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.670 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.670 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:10.670 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.670 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.670 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:10.670 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.670 00:02:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:10.930 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:11.192 /dev/nbd0 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.192 1+0 records in 00:13:11.192 1+0 records out 00:13:11.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667241 s, 6.1 MB/s 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.192 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:11.454 /dev/nbd1 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.454 1+0 records in 00:13:11.454 1+0 records out 00:13:11.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687346 s, 6.0 MB/s 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:11.454 00:02:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.454 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:11.715 /dev/nbd10 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.715 1+0 records in 00:13:11.715 1+0 records out 00:13:11.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588806 s, 7.0 MB/s 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.715 00:02:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:11.977 /dev/nbd11 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.977 00:02:02 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.977 1+0 records in 00:13:11.977 1+0 records out 00:13:11.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389748 s, 10.5 MB/s 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:11.977 /dev/nbd12 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.977 1+0 records in 00:13:11.977 1+0 records out 00:13:11.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000784256 s, 5.2 MB/s 00:13:11.977 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:12.239 /dev/nbd13 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.239 1+0 records in 00:13:12.239 1+0 records out 00:13:12.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045981 s, 8.9 MB/s 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.239 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd0", 00:13:12.501 "bdev_name": "nvme0n1" 00:13:12.501 }, 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd1", 00:13:12.501 "bdev_name": "nvme1n1" 00:13:12.501 }, 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd10", 00:13:12.501 "bdev_name": "nvme2n1" 00:13:12.501 }, 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd11", 00:13:12.501 "bdev_name": "nvme2n2" 00:13:12.501 }, 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd12", 00:13:12.501 "bdev_name": "nvme2n3" 00:13:12.501 }, 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd13", 00:13:12.501 "bdev_name": "nvme3n1" 00:13:12.501 } 00:13:12.501 ]' 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd0", 00:13:12.501 "bdev_name": "nvme0n1" 00:13:12.501 }, 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd1", 00:13:12.501 "bdev_name": "nvme1n1" 00:13:12.501 }, 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd10", 00:13:12.501 "bdev_name": "nvme2n1" 00:13:12.501 }, 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd11", 00:13:12.501 "bdev_name": "nvme2n2" 00:13:12.501 }, 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd12", 00:13:12.501 "bdev_name": "nvme2n3" 00:13:12.501 }, 00:13:12.501 { 00:13:12.501 "nbd_device": "/dev/nbd13", 00:13:12.501 "bdev_name": "nvme3n1" 00:13:12.501 } 00:13:12.501 ]' 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:12.501 /dev/nbd1 00:13:12.501 /dev/nbd10 00:13:12.501 /dev/nbd11 00:13:12.501 /dev/nbd12 00:13:12.501 /dev/nbd13' 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:12.501 /dev/nbd1 00:13:12.501 /dev/nbd10 00:13:12.501 /dev/nbd11 00:13:12.501 /dev/nbd12 00:13:12.501 /dev/nbd13' 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:12.501 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:12.502 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:12.502 256+0 records in 00:13:12.502 256+0 records out 00:13:12.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526583 s, 199 MB/s 00:13:12.502 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.502 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:12.763 256+0 records in 00:13:12.763 256+0 records out 00:13:12.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103522 s, 10.1 MB/s 00:13:12.763 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.763 00:02:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:13.025 256+0 records in 00:13:13.025 256+0 records out 00:13:13.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.220271 s, 
4.8 MB/s 00:13:13.025 00:02:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.025 00:02:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:13.287 256+0 records in 00:13:13.287 256+0 records out 00:13:13.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243426 s, 4.3 MB/s 00:13:13.287 00:02:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.287 00:02:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:13.287 256+0 records in 00:13:13.287 256+0 records out 00:13:13.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205327 s, 5.1 MB/s 00:13:13.287 00:02:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.287 00:02:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:13.548 256+0 records in 00:13:13.548 256+0 records out 00:13:13.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.195682 s, 5.4 MB/s 00:13:13.548 00:02:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.548 00:02:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:13.810 256+0 records in 00:13:13.810 256+0 records out 00:13:13.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240905 s, 4.4 MB/s 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:13.810 
00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.810 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:14.072 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:14.072 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:14.072 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:14.072 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.072 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.072 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:14.072 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.072 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.072 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.072 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:14.333 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:14.333 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:14.333 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:14.333 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.333 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.333 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:14.333 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.333 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.333 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.333 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:14.594 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:14.595 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:14.595 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:14.595 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.595 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.595 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:14.595 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.595 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.595 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.595 00:02:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:14.595 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.857 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:15.118 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:15.118 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:15.118 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:15.118 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.118 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.118 
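Each nbd_stop_disk RPC above is chased by waitfornbd_exit, which polls /proc/partitions until the kernel has really released the device; the (( i <= 20 )) guard in the trace caps the retries. A condensed sketch of that wait loop, with the poll interval assumed since the trace does not show it:

  waitfornbd_exit() {
      local nbd_name=$1
      for ((i = 1; i <= 20; i++)); do
          # -w matches whole words only, so nbd1 does not also match nbd10.
          grep -q -w "$nbd_name" /proc/partitions || return 0
          sleep 0.1   # assumed poll interval
      done
      echo "$nbd_name still present after 20 checks" >&2
      return 1
  }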
00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:15.118 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.118 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.118 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:15.118 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.118 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:15.379 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:15.641 malloc_lvol_verify 00:13:15.641 00:02:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:15.641 e43cc68b-7024-405f-9b86-5fc83091dae9 00:13:15.641 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:15.902 05628fd9-a054-45ae-a40c-f70de14d7c92 00:13:15.902 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:16.164 /dev/nbd0 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:16.164 mke2fs 1.47.0 (5-Feb-2023) 00:13:16.164 Discarding device blocks: 0/4096 
done 00:13:16.164 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:16.164 00:13:16.164 Allocating group tables: 0/1 done 00:13:16.164 Writing inode tables: 0/1 done 00:13:16.164 Creating journal (1024 blocks): done 00:13:16.164 Writing superblocks and filesystem accounting information: 0/1 done 00:13:16.164 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81253 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81253 ']' 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81253 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:16.164 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81253 00:13:16.426 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:16.426 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:16.426 killing process with pid 81253 00:13:16.426 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81253' 00:13:16.426 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81253 00:13:16.426 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81253 00:13:16.426 00:02:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:16.426 00:13:16.426 real 0m9.776s 00:13:16.426 user 0m13.531s 00:13:16.426 sys 0m3.446s 00:13:16.426 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.426 ************************************ 00:13:16.426 00:02:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:16.426 END TEST bdev_nbd 00:13:16.426 ************************************ 
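Just before killing the target above, nbd_get_count confirms that no exports remain: it fetches the disk list over the RPC socket and counts /dev/nbd names in the JSON reply, where an empty array yields a count of 0. The same check, reduced to a sketch (paths copied from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  disks_json=$("$rpc" -s "$sock" nbd_get_disks)   # '[]' once every disk is stopped
  # grep -c prints 0 and exits non-zero on no match, hence the || true guard.
  count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  (( count == 0 )) || echo "expected no NBD disks, found $count" >&2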
00:13:16.426 00:02:06 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:16.426 00:02:06 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:16.426 00:02:06 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:16.426 00:02:06 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:16.426 00:02:06 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:16.426 00:02:06 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.426 00:02:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.426 ************************************ 00:13:16.426 START TEST bdev_fio 00:13:16.426 ************************************ 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:16.426 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:16.426 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:16.688 ************************************ 00:13:16.688 START TEST bdev_fio_rw_verify 00:13:16.688 ************************************ 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
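The echo loop above assembles bdev.fio incrementally: fio_config_gen lays down a global template, then each bdev gets a [job_...] section plus a filename= line. A simplified sketch of that generation step; the [global] options shown are illustrative only, since the real template lives inside fio_config_gen and is not printed in the log:

  bdev_fio=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
  bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)

  cat > "$bdev_fio" <<'EOF'
  [global]
  thread=1
  direct=1        # assumed template contents, not shown in the trace
  EOF
  for b in "${bdevs[@]}"; do
      printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$bdev_fio"
  done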
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:16.688 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.689 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:16.689 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:16.689 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:16.689 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:16.689 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.689 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:16.689 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:16.689 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:16.689 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:16.689 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:16.689 00:02:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.689 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.689 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.689 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.689 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.689 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.689 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.689 fio-3.35 00:13:16.689 Starting 6 threads 00:13:28.933 00:13:28.933 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81642: Thu Nov 21 00:02:17 2024 00:13:28.933 read: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(577MiB/10002msec) 00:13:28.933 slat (usec): min=2, max=2539, avg= 6.72, stdev=19.16 00:13:28.933 clat (usec): min=85, max=6605, avg=1251.68, stdev=784.59 00:13:28.933 lat (usec): min=89, max=6621, avg=1258.40, stdev=785.54 
00:13:28.933 clat percentiles (usec): 00:13:28.933 | 50.000th=[ 1139], 99.000th=[ 3654], 99.900th=[ 5014], 99.990th=[ 6456], 00:13:28.933 | 99.999th=[ 6587] 00:13:28.933 write: IOPS=15.1k, BW=58.9MiB/s (61.8MB/s)(589MiB/10002msec); 0 zone resets 00:13:28.933 slat (usec): min=12, max=4053, avg=44.84, stdev=156.54 00:13:28.933 clat (usec): min=60, max=13430, avg=1624.94, stdev=936.79 00:13:28.933 lat (usec): min=74, max=13466, avg=1669.78, stdev=950.98 00:13:28.933 clat percentiles (usec): 00:13:28.933 | 50.000th=[ 1467], 99.000th=[ 4555], 99.900th=[ 6194], 99.990th=[ 9241], 00:13:28.933 | 99.999th=[13435] 00:13:28.933 bw ( KiB/s): min=46708, max=113898, per=100.00%, avg=60754.53, stdev=2707.17, samples=114 00:13:28.933 iops : min=11674, max=28474, avg=15187.32, stdev=676.81, samples=114 00:13:28.933 lat (usec) : 100=0.02%, 250=3.23%, 500=9.20%, 750=10.69%, 1000=11.40% 00:13:28.933 lat (msec) : 2=43.60%, 4=20.43%, 10=1.43%, 20=0.01% 00:13:28.933 cpu : usr=42.11%, sys=32.54%, ctx=5625, majf=0, minf=16726 00:13:28.933 IO depths : 1=10.7%, 2=22.9%, 4=51.6%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:28.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.933 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.933 issued rwts: total=147743,150825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.933 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:28.933 00:13:28.933 Run status group 0 (all jobs): 00:13:28.934 READ: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=577MiB (605MB), run=10002-10002msec 00:13:28.934 WRITE: bw=58.9MiB/s (61.8MB/s), 58.9MiB/s-58.9MiB/s (61.8MB/s-61.8MB/s), io=589MiB (618MB), run=10002-10002msec 00:13:28.934 ----------------------------------------------------- 00:13:28.934 Suppressions used: 00:13:28.934 count bytes template 00:13:28.934 6 48 /usr/src/fio/parse.c 00:13:28.934 2981 286176 /usr/src/fio/iolog.c 00:13:28.934 1 8 libtcmalloc_minimal.so 00:13:28.934 1 904 libcrypto.so 00:13:28.934 ----------------------------------------------------- 00:13:28.934 00:13:28.934 00:13:28.934 real 0m11.061s 00:13:28.934 user 0m25.976s 00:13:28.934 sys 0m19.795s 00:13:28.934 00:02:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.934 00:02:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:28.934 ************************************ 00:13:28.934 END TEST bdev_fio_rw_verify 00:13:28.934 ************************************ 00:13:28.934 00:02:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:28.934 00:02:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- 
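One setup detail from the fio launch earlier in this test is worth pulling out: the SPDK fio plugin is a shared object loaded into an external fio binary, so the harness first resolves which ASan runtime the plugin links against and preloads it ahead of the plugin itself. A condensed sketch of that resolution step, as traced above via ldd, grep, and awk:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # The third ldd column is the resolved library path, e.g. /usr/lib64/libasan.so.8.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # Preload the sanitizer runtime first, then the plugin, before starting fio;
  # otherwise an ASan-instrumented plugin aborts inside a plain fio binary.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev "$@"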
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "bb5b5327-2d3f-4d87-9103-c03c4465ce30"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "bb5b5327-2d3f-4d87-9103-c03c4465ce30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "ea8fbd00-143e-48a0-908c-0ff4e72b7988"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ea8fbd00-143e-48a0-908c-0ff4e72b7988",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b54b6bf0-99ca-441e-96de-0d64c3b3b168"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b54b6bf0-99ca-441e-96de-0d64c3b3b168",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e48d308e-397b-4506-8153-8f39fa376c5a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e48d308e-397b-4506-8153-8f39fa376c5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "f7e313e3-2032-44d8-b23a-55ec85ce03bd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f7e313e3-2032-44d8-b23a-55ec85ce03bd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "405bd802-f97b-4843-a295-146ee91d8710"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "405bd802-f97b-4843-a295-146ee91d8710",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.934 /home/vagrant/spdk_repo/spdk 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:28.934 00:13:28.934 real 0m11.234s 00:13:28.934 user 
0m26.052s 00:13:28.934 sys 0m19.868s 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.934 00:02:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:28.934 ************************************ 00:13:28.934 END TEST bdev_fio 00:13:28.934 ************************************ 00:13:28.934 00:02:18 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:28.934 00:02:18 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:28.934 00:02:18 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:28.934 00:02:18 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.934 00:02:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.934 ************************************ 00:13:28.934 START TEST bdev_verify 00:13:28.934 ************************************ 00:13:28.934 00:02:18 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:28.934 [2024-11-21 00:02:18.178409] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:28.934 [2024-11-21 00:02:18.178553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81808 ] 00:13:28.934 [2024-11-21 00:02:18.313970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:28.934 [2024-11-21 00:02:18.365396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.934 [2024-11-21 00:02:18.365444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.934 Running I/O for 5 seconds... 
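The verify pass just launched is one bdevperf invocation; its flags are copied from the trace and annotated below. The reading of -C is inferred from the per-core duplication of job rows in the results that follow, so treat it as an interpretation rather than quoted help text.

  # --json  bdev definitions to load (the xnvme bdevs under test)
  # -q 128  queue depth per job
  # -o 4096 IO size in bytes (4 KiB)
  # -w verify  write, read back, and compare patterns
  # -t 5    run for 5 seconds
  # -C      let every core in the mask drive every bdev
  # -m 0x3  core mask: reactors on cores 0 and 1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3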
00:13:30.448 20226.00 IOPS, 79.01 MiB/s
[2024-11-21T00:02:21.814Z] 22483.50 IOPS, 87.83 MiB/s
[2024-11-21T00:02:23.199Z] 22695.33 IOPS, 88.65 MiB/s
[2024-11-21T00:02:23.773Z] 23249.50 IOPS, 90.82 MiB/s
[2024-11-21T00:02:23.773Z] 23476.00 IOPS, 91.70 MiB/s
00:13:33.352 Latency(us)
00:13:33.352 [2024-11-21T00:02:23.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:33.352 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0x0 length 0xa0000
00:13:33.352 nvme0n1 : 5.03 1960.22 7.66 0.00 0.00 65173.26 5847.83 110503.78
00:13:33.352 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0xa0000 length 0xa0000
00:13:33.352 nvme0n1 : 5.05 2130.89 8.32 0.00 0.00 59970.30 1638.40 112116.97
00:13:33.352 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0x0 length 0xbd0bd
00:13:33.352 nvme1n1 : 5.08 1467.98 5.73 0.00 0.00 86598.98 1865.26 187130.49
00:13:33.352 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:13:33.352 nvme1n1 : 5.07 1863.14 7.28 0.00 0.00 68286.33 715.22 138734.67
00:13:33.352 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0x0 length 0x80000
00:13:33.352 nvme2n1 : 5.06 1897.25 7.41 0.00 0.00 66913.42 11141.12 127442.31
00:13:33.352 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0x80000 length 0x80000
00:13:33.352 nvme2n1 : 5.05 2053.63 8.02 0.00 0.00 62042.21 8166.79 115343.36
00:13:33.352 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0x0 length 0x80000
00:13:33.352 nvme2n2 : 5.06 1896.60 7.41 0.00 0.00 66816.70 8721.33 105664.20
00:13:33.352 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0x80000 length 0x80000
00:13:33.352 nvme2n2 : 5.04 2056.95 8.03 0.00 0.00 61874.15 12754.31 111310.38
00:13:33.352 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0x0 length 0x80000
00:13:33.352 nvme2n3 : 5.07 1995.20 7.79 0.00 0.00 63463.33 2545.82 112116.97
00:13:33.352 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0x80000 length 0x80000
00:13:33.352 nvme2n3 : 5.04 2056.24 8.03 0.00 0.00 61783.96 8973.39 91952.05
00:13:33.352 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0x0 length 0x20000
00:13:33.352 nvme3n1 : 5.07 2043.61 7.98 0.00 0.00 61841.04 6125.10 112116.97
00:13:33.352 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:33.352 Verification LBA range: start 0x20000 length 0x20000
00:13:33.352 nvme3n1 : 5.06 2101.41 8.21 0.00 0.00 60361.69 5268.09 89128.96
00:13:33.352 [2024-11-21T00:02:23.773Z] ===================================================================================================================
00:13:33.352 [2024-11-21T00:02:23.773Z] Total : 23523.12 91.89 0.00 0.00 64844.05 715.22 187130.49
00:13:33.614 00:13:33.614
00:13:33.614 real 0m5.866s
00:13:33.614 user 0m8.796s
00:13:33.614 sys 0m1.909s
00:02:23 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.614 ************************************ 00:13:33.614 END TEST bdev_verify 00:13:33.614 ************************************ 00:13:33.614 00:02:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:33.876 00:02:24 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:33.876 00:02:24 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:33.876 00:02:24 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.876 00:02:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.876 ************************************ 00:13:33.876 START TEST bdev_verify_big_io 00:13:33.876 ************************************ 00:13:33.876 00:02:24 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:33.876 [2024-11-21 00:02:24.127485] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:33.876 [2024-11-21 00:02:24.127633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81904 ] 00:13:33.876 [2024-11-21 00:02:24.266254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:34.136 [2024-11-21 00:02:24.317683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.136 [2024-11-21 00:02:24.317805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.397 Running I/O for 5 seconds... 
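A quick sanity check that ties the two bdevperf runs together: the MiB/s column is simply IOPS times IO size. The verify pass above used 4 KiB IOs, while the big-IO pass below uses 64 KiB ones, which is why it posts high MiB/s from far fewer IOPS. Totals taken from the two result tables:

  awk 'BEGIN { printf "%.2f MiB/s\n", 23523.12 * 4096  / 1048576 }'   # 91.89, 4 KiB verify run
  awk 'BEGIN { printf "%.2f MiB/s\n", 1663.46 * 65536 / 1048576 }'    # 103.97, 64 KiB big-IO run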
00:13:40.303 1392.00 IOPS, 87.00 MiB/s
[2024-11-21T00:02:30.724Z] 2689.50 IOPS, 168.09 MiB/s
[2024-11-21T00:02:30.986Z] 2866.33 IOPS, 179.15 MiB/s
00:13:40.565 Latency(us)
00:13:40.565 [2024-11-21T00:02:30.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:40.565 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0x0 length 0xa000
00:13:40.565 nvme0n1 : 6.04 106.00 6.62 0.00 0.00 1152966.03 229073.53 1477685.56
00:13:40.565 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0xa000 length 0xa000
00:13:40.565 nvme0n1 : 5.86 117.45 7.34 0.00 0.00 1055872.89 137121.48 1961643.72
00:13:40.565 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0x0 length 0xbd0b
00:13:40.565 nvme1n1 : 5.91 119.45 7.47 0.00 0.00 974673.63 21878.94 1245385.65
00:13:40.565 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0xbd0b length 0xbd0b
00:13:40.565 nvme1n1 : 5.88 117.07 7.32 0.00 0.00 1041508.13 7864.32 2387526.89
00:13:40.565 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0x0 length 0x8000
00:13:40.565 nvme2n1 : 5.96 115.39 7.21 0.00 0.00 973670.35 20064.10 1677721.60
00:13:40.565 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0x8000 length 0x8000
00:13:40.565 nvme2n1 : 5.86 144.60 9.04 0.00 0.00 830204.39 27222.65 1109877.37
00:13:40.565 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0x0 length 0x8000
00:13:40.565 nvme2n2 : 6.10 125.90 7.87 0.00 0.00 846208.25 3352.42 877577.45
00:13:40.565 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0x8000 length 0x8000
00:13:40.565 nvme2n2 : 5.87 139.09 8.69 0.00 0.00 848728.58 7360.20 2181038.08
00:13:40.565 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0x0 length 0x8000
00:13:40.565 nvme2n3 : 6.08 144.75 9.05 0.00 0.00 716510.90 10838.65 1910021.51
00:13:40.565 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0x8000 length 0x8000
00:13:40.565 nvme2n3 : 5.87 128.08 8.01 0.00 0.00 901375.65 9628.75 1871304.86
00:13:40.565 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0x0 length 0x2000
00:13:40.565 nvme3n1 : 6.27 214.85 13.43 0.00 0.00 465174.50 190.62 2245565.83
00:13:40.565 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:40.565 Verification LBA range: start 0x2000 length 0x2000
00:13:40.565 nvme3n1 : 5.87 190.83 11.93 0.00 0.00 591712.62 6452.78 787238.60
00:13:40.565 [2024-11-21T00:02:30.986Z] ===================================================================================================================
00:13:40.565 [2024-11-21T00:02:30.986Z] Total : 1663.46 103.97 0.00 0.00 823160.37 190.62 2387526.89
00:13:40.827 00:13:40.827
00:13:40.827 real 0m7.076s
00:13:40.827 user 0m12.937s
00:13:40.827 sys 0m0.483s
************************
END TEST bdev_verify_big_io
************************
00:02:31 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.827 00:02:31 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.827 00:02:31 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:40.827 00:02:31 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:40.827 00:02:31 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.827 00:02:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.827 ************************************ 00:13:40.827 START TEST bdev_write_zeroes 00:13:40.827 ************************************ 00:13:40.827 00:02:31 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:41.089 [2024-11-21 00:02:31.249333] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:41.089 [2024-11-21 00:02:31.249464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82003 ] 00:13:41.089 [2024-11-21 00:02:31.388981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.089 [2024-11-21 00:02:31.437981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.350 Running I/O for 1 seconds... 00:13:42.294 70665.00 IOPS, 276.04 MiB/s 00:13:42.294 Latency(us) 00:13:42.294 [2024-11-21T00:02:32.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.294 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.294 nvme0n1 : 1.02 11428.66 44.64 0.00 0.00 11190.23 5620.97 22483.89 00:13:42.294 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.294 nvme1n1 : 1.03 13020.56 50.86 0.00 0.00 9761.28 4083.40 22383.06 00:13:42.294 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.294 nvme2n1 : 1.03 11721.95 45.79 0.00 0.00 10808.20 3730.51 22988.01 00:13:42.294 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.294 nvme2n2 : 1.02 11249.44 43.94 0.00 0.00 11251.55 5041.23 24802.86 00:13:42.294 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.294 nvme2n3 : 1.03 11333.96 44.27 0.00 0.00 11158.50 4990.82 20669.05 00:13:42.294 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.294 nvme3n1 : 1.03 11321.20 44.22 0.00 0.00 11162.67 5116.85 21273.99 00:13:42.294 [2024-11-21T00:02:32.715Z] =================================================================================================================== 00:13:42.294 [2024-11-21T00:02:32.715Z] Total : 70075.77 273.73 0.00 0.00 10859.96 3730.51 24802.86 00:13:42.554 00:13:42.554 real 0m1.685s 00:13:42.554 user 0m1.022s 00:13:42.554 sys 0m0.480s 00:13:42.554 00:02:32 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.554 ************************************ 00:13:42.554 END TEST bdev_write_zeroes 00:13:42.554 
************************************ 00:13:42.554 00:02:32 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:42.554 00:02:32 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:42.554 00:02:32 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:42.554 00:02:32 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.554 00:02:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.554 ************************************ 00:13:42.554 START TEST bdev_json_nonenclosed 00:13:42.554 ************************************ 00:13:42.554 00:02:32 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:42.815 [2024-11-21 00:02:32.975662] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:42.815 [2024-11-21 00:02:32.975773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82046 ] 00:13:42.815 [2024-11-21 00:02:33.113389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.815 [2024-11-21 00:02:33.145596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.815 [2024-11-21 00:02:33.145683] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:42.815 [2024-11-21 00:02:33.145700] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:42.815 [2024-11-21 00:02:33.145711] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:42.815 00:13:42.815 real 0m0.305s 00:13:42.815 user 0m0.122s 00:13:42.815 sys 0m0.080s 00:13:42.815 00:02:33 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.815 00:02:33 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:42.815 ************************************ 00:13:42.815 END TEST bdev_json_nonenclosed 00:13:42.815 ************************************ 00:13:43.076 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.076 00:02:33 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:43.076 00:02:33 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.076 00:02:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.076 ************************************ 00:13:43.076 START TEST bdev_json_nonarray 00:13:43.076 ************************************ 00:13:43.076 00:02:33 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.076 [2024-11-21 00:02:33.334916] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
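The two negative tests here, bdev_json_nonenclosed above and bdev_json_nonarray below, feed bdevperf configs that fail validation exactly as the logged errors say: one file's contents are not enclosed in {}, and in the other "subsystems" is not an array. For contrast, the minimal well-formed shape both checks expect looks like this; it is inferred from the error messages and from the save_config dump later in the log, since the actual nonenclosed.json and nonarray.json contents are not printed:

  cat > /tmp/minimal-bdev.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }
  EOF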
00:13:43.076 [2024-11-21 00:02:33.335023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82066 ] 00:13:43.076 [2024-11-21 00:02:33.472866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.337 [2024-11-21 00:02:33.522575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.337 [2024-11-21 00:02:33.522703] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:43.337 [2024-11-21 00:02:33.522721] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:43.337 [2024-11-21 00:02:33.522734] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:43.337 00:13:43.337 real 0m0.353s 00:13:43.337 user 0m0.150s 00:13:43.337 sys 0m0.097s 00:13:43.337 00:02:33 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.337 ************************************ 00:13:43.337 END TEST bdev_json_nonarray 00:13:43.337 ************************************ 00:13:43.337 00:02:33 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:43.337 00:02:33 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:43.909 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:44.853 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:44.853 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:44.853 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:46.770 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:46.770 00:13:46.770 real 0m49.611s 00:13:46.770 user 1m15.043s 00:13:46.770 sys 0m34.465s 00:13:46.770 00:02:37 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.770 ************************************ 00:13:46.770 END TEST blockdev_xnvme 00:13:46.770 ************************************ 00:13:46.770 00:02:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.770 00:02:37 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:46.770 00:02:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:46.770 00:02:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:46.770 00:02:37 -- 
common/autotest_common.sh@10 -- # set +x 00:13:46.770 ************************************ 00:13:46.770 START TEST ublk 00:13:46.770 ************************************ 00:13:46.770 00:02:37 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:46.770 * Looking for test storage... 00:13:46.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:46.770 00:02:37 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:46.770 00:02:37 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:46.770 00:02:37 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:47.033 00:02:37 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:47.033 00:02:37 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.033 00:02:37 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.033 00:02:37 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.033 00:02:37 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.033 00:02:37 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.033 00:02:37 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.033 00:02:37 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.033 00:02:37 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.033 00:02:37 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.033 00:02:37 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.033 00:02:37 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.033 00:02:37 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:47.033 00:02:37 ublk -- scripts/common.sh@345 -- # : 1 00:13:47.033 00:02:37 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.033 00:02:37 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:47.033 00:02:37 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:47.033 00:02:37 ublk -- scripts/common.sh@353 -- # local d=1 00:13:47.033 00:02:37 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.033 00:02:37 ublk -- scripts/common.sh@355 -- # echo 1 00:13:47.033 00:02:37 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.033 00:02:37 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:47.033 00:02:37 ublk -- scripts/common.sh@353 -- # local d=2 00:13:47.033 00:02:37 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.033 00:02:37 ublk -- scripts/common.sh@355 -- # echo 2 00:13:47.033 00:02:37 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.033 00:02:37 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.033 00:02:37 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.033 00:02:37 ublk -- scripts/common.sh@368 -- # return 0 00:13:47.033 00:02:37 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.033 00:02:37 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:47.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.033 --rc genhtml_branch_coverage=1 00:13:47.033 --rc genhtml_function_coverage=1 00:13:47.033 --rc genhtml_legend=1 00:13:47.033 --rc geninfo_all_blocks=1 00:13:47.033 --rc geninfo_unexecuted_blocks=1 00:13:47.033 00:13:47.033 ' 00:13:47.033 00:02:37 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:47.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.033 --rc genhtml_branch_coverage=1 00:13:47.033 --rc genhtml_function_coverage=1 00:13:47.033 --rc genhtml_legend=1 00:13:47.033 --rc geninfo_all_blocks=1 00:13:47.033 --rc geninfo_unexecuted_blocks=1 00:13:47.033 00:13:47.033 ' 00:13:47.033 00:02:37 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:47.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.033 --rc genhtml_branch_coverage=1 00:13:47.033 --rc genhtml_function_coverage=1 00:13:47.033 --rc genhtml_legend=1 00:13:47.033 --rc geninfo_all_blocks=1 00:13:47.033 --rc geninfo_unexecuted_blocks=1 00:13:47.033 00:13:47.033 ' 00:13:47.033 00:02:37 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:47.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.033 --rc genhtml_branch_coverage=1 00:13:47.033 --rc genhtml_function_coverage=1 00:13:47.033 --rc genhtml_legend=1 00:13:47.033 --rc geninfo_all_blocks=1 00:13:47.033 --rc geninfo_unexecuted_blocks=1 00:13:47.033 00:13:47.033 ' 00:13:47.033 00:02:37 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:47.033 00:02:37 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:47.033 00:02:37 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:47.033 00:02:37 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:47.033 00:02:37 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:47.033 00:02:37 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:47.033 00:02:37 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:47.033 00:02:37 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:47.033 00:02:37 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:47.033 00:02:37 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:47.033 00:02:37 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:47.033 00:02:37 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:47.033 00:02:37 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:47.033 00:02:37 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:47.033 00:02:37 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:47.033 00:02:37 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:47.033 00:02:37 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:47.033 00:02:37 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:47.033 00:02:37 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:47.033 00:02:37 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:47.033 00:02:37 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:47.033 00:02:37 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.033 00:02:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.033 ************************************ 00:13:47.033 START TEST test_save_ublk_config 00:13:47.033 ************************************ 00:13:47.033 00:02:37 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:47.033 00:02:37 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:47.033 00:02:37 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82359 00:13:47.034 00:02:37 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:47.034 00:02:37 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82359 00:13:47.034 00:02:37 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82359 ']' 00:13:47.034 00:02:37 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.034 00:02:37 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:47.034 00:02:37 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.034 00:02:37 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:47.034 00:02:37 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:47.034 00:02:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:47.034 [2024-11-21 00:02:37.359540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:47.034 [2024-11-21 00:02:37.360183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82359 ] 00:13:47.295 [2024-11-21 00:02:37.492380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.295 [2024-11-21 00:02:37.545391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.868 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:47.868 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:47.868 00:02:38 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:47.868 00:02:38 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:47.868 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.868 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:47.869 [2024-11-21 00:02:38.219317] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:47.869 [2024-11-21 00:02:38.219670] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:47.869 malloc0 00:13:47.869 [2024-11-21 00:02:38.251460] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:47.869 [2024-11-21 00:02:38.251542] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:47.869 [2024-11-21 00:02:38.251551] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:47.869 [2024-11-21 00:02:38.251564] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:47.869 [2024-11-21 00:02:38.259498] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:47.869 [2024-11-21 00:02:38.259537] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:47.869 [2024-11-21 00:02:38.267329] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:47.869 [2024-11-21 00:02:38.267445] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:47.869 [2024-11-21 00:02:38.284322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:48.130 0 00:13:48.130 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.130 00:02:38 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:48.130 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.130 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:48.392 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.392 00:02:38 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:48.392 "subsystems": [ 00:13:48.392 { 00:13:48.392 "subsystem": "fsdev", 00:13:48.392 "config": [ 00:13:48.392 { 00:13:48.392 "method": "fsdev_set_opts", 00:13:48.392 "params": { 00:13:48.392 "fsdev_io_pool_size": 65535, 00:13:48.392 "fsdev_io_cache_size": 256 00:13:48.392 } 00:13:48.392 } 00:13:48.392 ] 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "subsystem": "keyring", 00:13:48.392 "config": [] 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "subsystem": "iobuf", 00:13:48.392 "config": [ 00:13:48.392 { 
00:13:48.392 "method": "iobuf_set_options", 00:13:48.392 "params": { 00:13:48.392 "small_pool_count": 8192, 00:13:48.392 "large_pool_count": 1024, 00:13:48.392 "small_bufsize": 8192, 00:13:48.392 "large_bufsize": 135168 00:13:48.392 } 00:13:48.392 } 00:13:48.392 ] 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "subsystem": "sock", 00:13:48.392 "config": [ 00:13:48.392 { 00:13:48.392 "method": "sock_set_default_impl", 00:13:48.392 "params": { 00:13:48.392 "impl_name": "posix" 00:13:48.392 } 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "method": "sock_impl_set_options", 00:13:48.392 "params": { 00:13:48.392 "impl_name": "ssl", 00:13:48.392 "recv_buf_size": 4096, 00:13:48.392 "send_buf_size": 4096, 00:13:48.392 "enable_recv_pipe": true, 00:13:48.392 "enable_quickack": false, 00:13:48.392 "enable_placement_id": 0, 00:13:48.392 "enable_zerocopy_send_server": true, 00:13:48.392 "enable_zerocopy_send_client": false, 00:13:48.392 "zerocopy_threshold": 0, 00:13:48.392 "tls_version": 0, 00:13:48.392 "enable_ktls": false 00:13:48.392 } 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "method": "sock_impl_set_options", 00:13:48.392 "params": { 00:13:48.392 "impl_name": "posix", 00:13:48.392 "recv_buf_size": 2097152, 00:13:48.392 "send_buf_size": 2097152, 00:13:48.392 "enable_recv_pipe": true, 00:13:48.392 "enable_quickack": false, 00:13:48.392 "enable_placement_id": 0, 00:13:48.392 "enable_zerocopy_send_server": true, 00:13:48.392 "enable_zerocopy_send_client": false, 00:13:48.392 "zerocopy_threshold": 0, 00:13:48.392 "tls_version": 0, 00:13:48.392 "enable_ktls": false 00:13:48.392 } 00:13:48.392 } 00:13:48.392 ] 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "subsystem": "vmd", 00:13:48.392 "config": [] 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "subsystem": "accel", 00:13:48.392 "config": [ 00:13:48.392 { 00:13:48.392 "method": "accel_set_options", 00:13:48.392 "params": { 00:13:48.392 "small_cache_size": 128, 00:13:48.392 "large_cache_size": 16, 00:13:48.392 "task_count": 2048, 00:13:48.392 "sequence_count": 2048, 00:13:48.392 "buf_count": 2048 00:13:48.392 } 00:13:48.392 } 00:13:48.392 ] 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "subsystem": "bdev", 00:13:48.392 "config": [ 00:13:48.392 { 00:13:48.392 "method": "bdev_set_options", 00:13:48.392 "params": { 00:13:48.392 "bdev_io_pool_size": 65535, 00:13:48.392 "bdev_io_cache_size": 256, 00:13:48.392 "bdev_auto_examine": true, 00:13:48.392 "iobuf_small_cache_size": 128, 00:13:48.392 "iobuf_large_cache_size": 16 00:13:48.392 } 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "method": "bdev_raid_set_options", 00:13:48.392 "params": { 00:13:48.392 "process_window_size_kb": 1024, 00:13:48.392 "process_max_bandwidth_mb_sec": 0 00:13:48.392 } 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "method": "bdev_iscsi_set_options", 00:13:48.392 "params": { 00:13:48.392 "timeout_sec": 30 00:13:48.392 } 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "method": "bdev_nvme_set_options", 00:13:48.392 "params": { 00:13:48.392 "action_on_timeout": "none", 00:13:48.392 "timeout_us": 0, 00:13:48.392 "timeout_admin_us": 0, 00:13:48.392 "keep_alive_timeout_ms": 10000, 00:13:48.392 "arbitration_burst": 0, 00:13:48.392 "low_priority_weight": 0, 00:13:48.392 "medium_priority_weight": 0, 00:13:48.392 "high_priority_weight": 0, 00:13:48.392 "nvme_adminq_poll_period_us": 10000, 00:13:48.392 "nvme_ioq_poll_period_us": 0, 00:13:48.392 "io_queue_requests": 0, 00:13:48.392 "delay_cmd_submit": true, 00:13:48.392 "transport_retry_count": 4, 00:13:48.392 "bdev_retry_count": 3, 00:13:48.392 
"transport_ack_timeout": 0, 00:13:48.392 "ctrlr_loss_timeout_sec": 0, 00:13:48.392 "reconnect_delay_sec": 0, 00:13:48.392 "fast_io_fail_timeout_sec": 0, 00:13:48.392 "disable_auto_failback": false, 00:13:48.392 "generate_uuids": false, 00:13:48.392 "transport_tos": 0, 00:13:48.392 "nvme_error_stat": false, 00:13:48.392 "rdma_srq_size": 0, 00:13:48.392 "io_path_stat": false, 00:13:48.392 "allow_accel_sequence": false, 00:13:48.392 "rdma_max_cq_size": 0, 00:13:48.392 "rdma_cm_event_timeout_ms": 0, 00:13:48.392 "dhchap_digests": [ 00:13:48.392 "sha256", 00:13:48.392 "sha384", 00:13:48.392 "sha512" 00:13:48.392 ], 00:13:48.392 "dhchap_dhgroups": [ 00:13:48.392 "null", 00:13:48.392 "ffdhe2048", 00:13:48.392 "ffdhe3072", 00:13:48.392 "ffdhe4096", 00:13:48.392 "ffdhe6144", 00:13:48.392 "ffdhe8192" 00:13:48.392 ] 00:13:48.392 } 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "method": "bdev_nvme_set_hotplug", 00:13:48.392 "params": { 00:13:48.392 "period_us": 100000, 00:13:48.392 "enable": false 00:13:48.392 } 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "method": "bdev_malloc_create", 00:13:48.392 "params": { 00:13:48.392 "name": "malloc0", 00:13:48.392 "num_blocks": 8192, 00:13:48.392 "block_size": 4096, 00:13:48.392 "physical_block_size": 4096, 00:13:48.392 "uuid": "728e89d0-861a-476d-b575-5d6d0afd1eee", 00:13:48.392 "optimal_io_boundary": 0, 00:13:48.392 "md_size": 0, 00:13:48.392 "dif_type": 0, 00:13:48.392 "dif_is_head_of_md": false, 00:13:48.392 "dif_pi_format": 0 00:13:48.392 } 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "method": "bdev_wait_for_examine" 00:13:48.392 } 00:13:48.392 ] 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "subsystem": "scsi", 00:13:48.392 "config": null 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "subsystem": "scheduler", 00:13:48.392 "config": [ 00:13:48.392 { 00:13:48.392 "method": "framework_set_scheduler", 00:13:48.392 "params": { 00:13:48.392 "name": "static" 00:13:48.392 } 00:13:48.392 } 00:13:48.392 ] 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "subsystem": "vhost_scsi", 00:13:48.392 "config": [] 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "subsystem": "vhost_blk", 00:13:48.392 "config": [] 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "subsystem": "ublk", 00:13:48.392 "config": [ 00:13:48.392 { 00:13:48.392 "method": "ublk_create_target", 00:13:48.392 "params": { 00:13:48.392 "cpumask": "1" 00:13:48.392 } 00:13:48.392 }, 00:13:48.392 { 00:13:48.392 "method": "ublk_start_disk", 00:13:48.392 "params": { 00:13:48.392 "bdev_name": "malloc0", 00:13:48.392 "ublk_id": 0, 00:13:48.392 "num_queues": 1, 00:13:48.393 "queue_depth": 128 00:13:48.393 } 00:13:48.393 } 00:13:48.393 ] 00:13:48.393 }, 00:13:48.393 { 00:13:48.393 "subsystem": "nbd", 00:13:48.393 "config": [] 00:13:48.393 }, 00:13:48.393 { 00:13:48.393 "subsystem": "nvmf", 00:13:48.393 "config": [ 00:13:48.393 { 00:13:48.393 "method": "nvmf_set_config", 00:13:48.393 "params": { 00:13:48.393 "discovery_filter": "match_any", 00:13:48.393 "admin_cmd_passthru": { 00:13:48.393 "identify_ctrlr": false 00:13:48.393 }, 00:13:48.393 "dhchap_digests": [ 00:13:48.393 "sha256", 00:13:48.393 "sha384", 00:13:48.393 "sha512" 00:13:48.393 ], 00:13:48.393 "dhchap_dhgroups": [ 00:13:48.393 "null", 00:13:48.393 "ffdhe2048", 00:13:48.393 "ffdhe3072", 00:13:48.393 "ffdhe4096", 00:13:48.393 "ffdhe6144", 00:13:48.393 "ffdhe8192" 00:13:48.393 ] 00:13:48.393 } 00:13:48.393 }, 00:13:48.393 { 00:13:48.393 "method": "nvmf_set_max_subsystems", 00:13:48.393 "params": { 00:13:48.393 "max_subsystems": 1024 00:13:48.393 } 00:13:48.393 }, 00:13:48.393 
{ 00:13:48.393 "method": "nvmf_set_crdt", 00:13:48.393 "params": { 00:13:48.393 "crdt1": 0, 00:13:48.393 "crdt2": 0, 00:13:48.393 "crdt3": 0 00:13:48.393 } 00:13:48.393 } 00:13:48.393 ] 00:13:48.393 }, 00:13:48.393 { 00:13:48.393 "subsystem": "iscsi", 00:13:48.393 "config": [ 00:13:48.393 { 00:13:48.393 "method": "iscsi_set_options", 00:13:48.393 "params": { 00:13:48.393 "node_base": "iqn.2016-06.io.spdk", 00:13:48.393 "max_sessions": 128, 00:13:48.393 "max_connections_per_session": 2, 00:13:48.393 "max_queue_depth": 64, 00:13:48.393 "default_time2wait": 2, 00:13:48.393 "default_time2retain": 20, 00:13:48.393 "first_burst_length": 8192, 00:13:48.393 "immediate_data": true, 00:13:48.393 "allow_duplicated_isid": false, 00:13:48.393 "error_recovery_level": 0, 00:13:48.393 "nop_timeout": 60, 00:13:48.393 "nop_in_interval": 30, 00:13:48.393 "disable_chap": false, 00:13:48.393 "require_chap": false, 00:13:48.393 "mutual_chap": false, 00:13:48.393 "chap_group": 0, 00:13:48.393 "max_large_datain_per_connection": 64, 00:13:48.393 "max_r2t_per_connection": 4, 00:13:48.393 "pdu_pool_size": 36864, 00:13:48.393 "immediate_data_pool_size": 16384, 00:13:48.393 "data_out_pool_size": 2048 00:13:48.393 } 00:13:48.393 } 00:13:48.393 ] 00:13:48.393 } 00:13:48.393 ] 00:13:48.393 }' 00:13:48.393 00:02:38 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82359 00:13:48.393 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82359 ']' 00:13:48.393 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82359 00:13:48.393 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:48.393 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:48.393 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82359 00:13:48.393 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:48.393 killing process with pid 82359 00:13:48.393 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:48.393 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82359' 00:13:48.393 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82359 00:13:48.393 00:02:38 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82359 00:13:48.655 [2024-11-21 00:02:38.865995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:48.655 [2024-11-21 00:02:38.902336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:48.655 [2024-11-21 00:02:38.902492] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:48.655 [2024-11-21 00:02:38.910342] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:48.655 [2024-11-21 00:02:38.910407] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:48.655 [2024-11-21 00:02:38.910415] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:48.655 [2024-11-21 00:02:38.910450] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:48.655 [2024-11-21 00:02:38.910603] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:49.233 00:02:39 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82402 00:13:49.233 00:02:39 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82402 
00:13:49.233 00:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82402 ']' 00:13:49.233 00:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.233 00:02:39 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:49.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.233 00:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:49.233 00:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.233 00:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:49.233 00:02:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:49.233 00:02:39 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:49.233 "subsystems": [ 00:13:49.233 { 00:13:49.233 "subsystem": "fsdev", 00:13:49.233 "config": [ 00:13:49.233 { 00:13:49.233 "method": "fsdev_set_opts", 00:13:49.233 "params": { 00:13:49.233 "fsdev_io_pool_size": 65535, 00:13:49.233 "fsdev_io_cache_size": 256 00:13:49.233 } 00:13:49.233 } 00:13:49.233 ] 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "subsystem": "keyring", 00:13:49.233 "config": [] 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "subsystem": "iobuf", 00:13:49.233 "config": [ 00:13:49.233 { 00:13:49.233 "method": "iobuf_set_options", 00:13:49.233 "params": { 00:13:49.233 "small_pool_count": 8192, 00:13:49.233 "large_pool_count": 1024, 00:13:49.233 "small_bufsize": 8192, 00:13:49.233 "large_bufsize": 135168 00:13:49.233 } 00:13:49.233 } 00:13:49.233 ] 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "subsystem": "sock", 00:13:49.233 "config": [ 00:13:49.233 { 00:13:49.233 "method": "sock_set_default_impl", 00:13:49.233 "params": { 00:13:49.233 "impl_name": "posix" 00:13:49.233 } 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "method": "sock_impl_set_options", 00:13:49.233 "params": { 00:13:49.233 "impl_name": "ssl", 00:13:49.233 "recv_buf_size": 4096, 00:13:49.233 "send_buf_size": 4096, 00:13:49.233 "enable_recv_pipe": true, 00:13:49.233 "enable_quickack": false, 00:13:49.233 "enable_placement_id": 0, 00:13:49.233 "enable_zerocopy_send_server": true, 00:13:49.233 "enable_zerocopy_send_client": false, 00:13:49.233 "zerocopy_threshold": 0, 00:13:49.233 "tls_version": 0, 00:13:49.233 "enable_ktls": false 00:13:49.233 } 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "method": "sock_impl_set_options", 00:13:49.233 "params": { 00:13:49.233 "impl_name": "posix", 00:13:49.233 "recv_buf_size": 2097152, 00:13:49.233 "send_buf_size": 2097152, 00:13:49.233 "enable_recv_pipe": true, 00:13:49.233 "enable_quickack": false, 00:13:49.233 "enable_placement_id": 0, 00:13:49.233 "enable_zerocopy_send_server": true, 00:13:49.233 "enable_zerocopy_send_client": false, 00:13:49.233 "zerocopy_threshold": 0, 00:13:49.233 "tls_version": 0, 00:13:49.233 "enable_ktls": false 00:13:49.233 } 00:13:49.233 } 00:13:49.233 ] 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "subsystem": "vmd", 00:13:49.233 "config": [] 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "subsystem": "accel", 00:13:49.233 "config": [ 00:13:49.233 { 00:13:49.233 "method": "accel_set_options", 00:13:49.233 "params": { 00:13:49.233 "small_cache_size": 128, 00:13:49.233 "large_cache_size": 16, 00:13:49.233 "task_count": 2048, 00:13:49.233 
"sequence_count": 2048, 00:13:49.233 "buf_count": 2048 00:13:49.233 } 00:13:49.233 } 00:13:49.233 ] 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "subsystem": "bdev", 00:13:49.233 "config": [ 00:13:49.233 { 00:13:49.233 "method": "bdev_set_options", 00:13:49.233 "params": { 00:13:49.233 "bdev_io_pool_size": 65535, 00:13:49.233 "bdev_io_cache_size": 256, 00:13:49.233 "bdev_auto_examine": true, 00:13:49.233 "iobuf_small_cache_size": 128, 00:13:49.233 "iobuf_large_cache_size": 16 00:13:49.233 } 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "method": "bdev_raid_set_options", 00:13:49.233 "params": { 00:13:49.233 "process_window_size_kb": 1024, 00:13:49.233 "process_max_bandwidth_mb_sec": 0 00:13:49.233 } 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "method": "bdev_iscsi_set_options", 00:13:49.233 "params": { 00:13:49.233 "timeout_sec": 30 00:13:49.233 } 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "method": "bdev_nvme_set_options", 00:13:49.233 "params": { 00:13:49.233 "action_on_timeout": "none", 00:13:49.233 "timeout_us": 0, 00:13:49.233 "timeout_admin_us": 0, 00:13:49.233 "keep_alive_timeout_ms": 10000, 00:13:49.233 "arbitration_burst": 0, 00:13:49.233 "low_priority_weight": 0, 00:13:49.233 "medium_priority_weight": 0, 00:13:49.233 "high_priority_weight": 0, 00:13:49.233 "nvme_adminq_poll_period_us": 10000, 00:13:49.233 "nvme_ioq_poll_period_us": 0, 00:13:49.233 "io_queue_requests": 0, 00:13:49.233 "delay_cmd_submit": true, 00:13:49.233 "transport_retry_count": 4, 00:13:49.233 "bdev_retry_count": 3, 00:13:49.233 "transport_ack_timeout": 0, 00:13:49.233 "ctrlr_loss_timeout_sec": 0, 00:13:49.233 "reconnect_delay_sec": 0, 00:13:49.233 "fast_io_fail_timeout_sec": 0, 00:13:49.233 "disable_auto_failback": false, 00:13:49.233 "generate_uuids": false, 00:13:49.233 "transport_tos": 0, 00:13:49.233 "nvme_error_stat": false, 00:13:49.233 "rdma_srq_size": 0, 00:13:49.233 "io_path_stat": false, 00:13:49.233 "allow_accel_sequence": false, 00:13:49.233 "rdma_max_cq_size": 0, 00:13:49.233 "rdma_cm_event_timeout_ms": 0, 00:13:49.233 "dhchap_digests": [ 00:13:49.233 "sha256", 00:13:49.233 "sha384", 00:13:49.233 "sha512" 00:13:49.233 ], 00:13:49.233 "dhchap_dhgroups": [ 00:13:49.233 "null", 00:13:49.233 "ffdhe2048", 00:13:49.233 "ffdhe3072", 00:13:49.233 "ffdhe4096", 00:13:49.233 "ffdhe6144", 00:13:49.233 "ffdhe8192" 00:13:49.233 ] 00:13:49.233 } 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "method": "bdev_nvme_set_hotplug", 00:13:49.233 "params": { 00:13:49.233 "period_us": 100000, 00:13:49.233 "enable": false 00:13:49.233 } 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "method": "bdev_malloc_create", 00:13:49.233 "params": { 00:13:49.233 "name": "malloc0", 00:13:49.233 "num_blocks": 8192, 00:13:49.233 "block_size": 4096, 00:13:49.233 "physical_block_size": 4096, 00:13:49.233 "uuid": "728e89d0-861a-476d-b575-5d6d0afd1eee", 00:13:49.233 "optimal_io_boundary": 0, 00:13:49.233 "md_size": 0, 00:13:49.233 "dif_type": 0, 00:13:49.233 "dif_is_head_of_md": false, 00:13:49.233 "dif_pi_format": 0 00:13:49.233 } 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "method": "bdev_wait_for_examine" 00:13:49.233 } 00:13:49.233 ] 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "subsystem": "scsi", 00:13:49.233 "config": null 00:13:49.233 }, 00:13:49.233 { 00:13:49.233 "subsystem": "scheduler", 00:13:49.233 "config": [ 00:13:49.233 { 00:13:49.233 "method": "framework_set_scheduler", 00:13:49.234 "params": { 00:13:49.234 "name": "static" 00:13:49.234 } 00:13:49.234 } 00:13:49.234 ] 00:13:49.234 }, 00:13:49.234 { 00:13:49.234 "subsystem": 
"vhost_scsi", 00:13:49.234 "config": [] 00:13:49.234 }, 00:13:49.234 { 00:13:49.234 "subsystem": "vhost_blk", 00:13:49.234 "config": [] 00:13:49.234 }, 00:13:49.234 { 00:13:49.234 "subsystem": "ublk", 00:13:49.234 "config": [ 00:13:49.234 { 00:13:49.234 "method": "ublk_create_target", 00:13:49.234 "params": { 00:13:49.234 "cpumask": "1" 00:13:49.234 } 00:13:49.234 }, 00:13:49.234 { 00:13:49.234 "method": "ublk_start_disk", 00:13:49.234 "params": { 00:13:49.234 "bdev_name": "malloc0", 00:13:49.234 "ublk_id": 0, 00:13:49.234 "num_queues": 1, 00:13:49.234 "queue_depth": 128 00:13:49.234 } 00:13:49.234 } 00:13:49.234 ] 00:13:49.234 }, 00:13:49.234 { 00:13:49.234 "subsystem": "nbd", 00:13:49.234 "config": [] 00:13:49.234 }, 00:13:49.234 { 00:13:49.234 "subsystem": "nvmf", 00:13:49.234 "config": [ 00:13:49.234 { 00:13:49.234 "method": "nvmf_set_config", 00:13:49.234 "params": { 00:13:49.234 "discovery_filter": "match_any", 00:13:49.234 "admin_cmd_passthru": { 00:13:49.234 "identify_ctrlr": false 00:13:49.234 }, 00:13:49.234 "dhchap_digests": [ 00:13:49.234 "sha256", 00:13:49.234 "sha384", 00:13:49.234 "sha512" 00:13:49.234 ], 00:13:49.234 "dhchap_dhgroups": [ 00:13:49.234 "null", 00:13:49.234 "ffdhe2048", 00:13:49.234 "ffdhe3072", 00:13:49.234 "ffdhe4096", 00:13:49.234 "ffdhe6144", 00:13:49.234 "ffdhe8192" 00:13:49.234 ] 00:13:49.234 } 00:13:49.234 }, 00:13:49.234 { 00:13:49.234 "method": "nvmf_set_max_subsystems", 00:13:49.234 "params": { 00:13:49.234 "max_subsystems": 1024 00:13:49.234 } 00:13:49.234 }, 00:13:49.234 { 00:13:49.234 "method": "nvmf_set_crdt", 00:13:49.234 "params": { 00:13:49.234 "crdt1": 0, 00:13:49.234 "crdt2": 0, 00:13:49.234 "crdt3": 0 00:13:49.234 } 00:13:49.234 } 00:13:49.234 ] 00:13:49.234 }, 00:13:49.234 { 00:13:49.234 "subsystem": "iscsi", 00:13:49.234 "config": [ 00:13:49.234 { 00:13:49.234 "method": "iscsi_set_options", 00:13:49.234 "params": { 00:13:49.234 "node_base": "iqn.2016-06.io.spdk", 00:13:49.234 "max_sessions": 128, 00:13:49.234 "max_connections_per_session": 2, 00:13:49.234 "max_queue_depth": 64, 00:13:49.234 "default_time2wait": 2, 00:13:49.234 "default_time2retain": 20, 00:13:49.234 "first_burst_length": 8192, 00:13:49.234 "immediate_data": true, 00:13:49.234 "allow_duplicated_isid": false, 00:13:49.234 "error_recovery_level": 0, 00:13:49.234 "nop_timeout": 60, 00:13:49.234 "nop_in_interval": 30, 00:13:49.234 "disable_chap": false, 00:13:49.234 "require_chap": false, 00:13:49.234 "mutual_chap": false, 00:13:49.234 "chap_group": 0, 00:13:49.234 "max_large_datain_per_connection": 64, 00:13:49.234 "max_r2t_per_connection": 4, 00:13:49.234 "pdu_pool_size": 36864, 00:13:49.234 "immediate_data_pool_size": 16384, 00:13:49.234 "data_out_pool_size": 2048 00:13:49.234 } 00:13:49.234 } 00:13:49.234 ] 00:13:49.234 } 00:13:49.234 ] 00:13:49.234 }' 00:13:49.234 [2024-11-21 00:02:39.459633] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:49.234 [2024-11-21 00:02:39.459758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82402 ] 00:13:49.234 [2024-11-21 00:02:39.596748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.495 [2024-11-21 00:02:39.652450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.756 [2024-11-21 00:02:40.026315] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:49.756 [2024-11-21 00:02:40.026581] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:49.756 [2024-11-21 00:02:40.034425] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:49.756 [2024-11-21 00:02:40.034487] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:49.756 [2024-11-21 00:02:40.034495] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:49.756 [2024-11-21 00:02:40.034502] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:49.756 [2024-11-21 00:02:40.043384] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:49.756 [2024-11-21 00:02:40.043405] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:49.756 [2024-11-21 00:02:40.050324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:49.756 [2024-11-21 00:02:40.050416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:49.756 [2024-11-21 00:02:40.067316] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82402 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82402 ']' 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82402 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82402 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:50.016 killing process with pid 82402 00:13:50.016 
00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82402' 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82402 00:13:50.016 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82402 00:13:50.278 [2024-11-21 00:02:40.562102] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:50.278 [2024-11-21 00:02:40.594379] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:50.278 [2024-11-21 00:02:40.594530] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:50.278 [2024-11-21 00:02:40.601324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:50.278 [2024-11-21 00:02:40.601376] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:50.278 [2024-11-21 00:02:40.601390] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:50.278 [2024-11-21 00:02:40.601421] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:50.278 [2024-11-21 00:02:40.601557] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:50.850 00:02:40 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:50.850 00:13:50.850 real 0m3.706s 00:13:50.850 user 0m2.584s 00:13:50.850 sys 0m1.763s 00:13:50.850 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.850 ************************************ 00:13:50.850 END TEST test_save_ublk_config 00:13:50.850 ************************************ 00:13:50.850 00:02:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:50.850 00:02:41 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82448 00:13:50.850 00:02:41 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:50.850 00:02:41 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82448 00:13:50.850 00:02:41 ublk -- common/autotest_common.sh@831 -- # '[' -z 82448 ']' 00:13:50.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.850 00:02:41 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:50.850 00:02:41 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.850 00:02:41 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.850 00:02:41 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.850 00:02:41 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.850 00:02:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.850 [2024-11-21 00:02:41.110446] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:50.850 [2024-11-21 00:02:41.110598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82448 ] 00:13:50.850 [2024-11-21 00:02:41.246996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.111 [2024-11-21 00:02:41.300088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.111 [2024-11-21 00:02:41.300146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.683 00:02:41 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.683 00:02:41 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:51.683 00:02:41 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:51.683 00:02:41 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:51.683 00:02:41 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.683 00:02:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.684 ************************************ 00:13:51.684 START TEST test_create_ublk 00:13:51.684 ************************************ 00:13:51.684 00:02:41 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:51.684 00:02:41 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:51.684 00:02:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.684 00:02:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.684 [2024-11-21 00:02:41.986333] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:51.684 [2024-11-21 00:02:41.988801] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:51.684 00:02:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.684 00:02:41 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:51.684 00:02:41 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:51.684 00:02:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.684 00:02:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.945 00:02:42 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:51.945 00:02:42 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.945 00:02:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.945 [2024-11-21 00:02:42.108526] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:51.945 [2024-11-21 00:02:42.109079] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:51.945 [2024-11-21 00:02:42.109102] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:51.945 [2024-11-21 00:02:42.109114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:51.945 [2024-11-21 00:02:42.116354] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:51.945 [2024-11-21 00:02:42.116406] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:51.945 
[2024-11-21 00:02:42.124343] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:51.945 [2024-11-21 00:02:42.125155] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:51.945 [2024-11-21 00:02:42.155348] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:51.945 00:02:42 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:51.945 00:02:42 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.945 00:02:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.945 00:02:42 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:51.945 { 00:13:51.945 "ublk_device": "/dev/ublkb0", 00:13:51.945 "id": 0, 00:13:51.945 "queue_depth": 512, 00:13:51.945 "num_queues": 4, 00:13:51.945 "bdev_name": "Malloc0" 00:13:51.945 } 00:13:51.945 ]' 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:51.945 00:02:42 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:51.945 00:02:42 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:51.945 00:02:42 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:51.945 00:02:42 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:51.945 00:02:42 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:51.945 00:02:42 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:51.945 00:02:42 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:51.945 00:02:42 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:51.945 00:02:42 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:51.945 00:02:42 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:51.945 00:02:42 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:13:51.945 00:02:42 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:52.207 fio: verification read phase will never start because write phase uses all of runtime 00:13:52.207 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:52.207 fio-3.35 00:13:52.207 Starting 1 process 00:14:02.195 00:14:02.195 fio_test: (groupid=0, jobs=1): err= 0: pid=82496: Thu Nov 21 00:02:52 2024 00:14:02.195 write: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(516MiB/10001msec); 0 zone resets 00:14:02.195 clat (usec): min=47, max=8021, avg=74.90, stdev=139.18 00:14:02.195 lat (usec): min=47, max=8024, avg=75.35, stdev=139.25 00:14:02.195 clat percentiles (usec): 00:14:02.195 | 1.00th=[ 53], 5.00th=[ 56], 10.00th=[ 57], 20.00th=[ 59], 00:14:02.195 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 68], 00:14:02.195 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 80], 95.00th=[ 88], 00:14:02.195 | 99.00th=[ 178], 99.50th=[ 273], 99.90th=[ 3195], 99.95th=[ 3621], 00:14:02.195 | 99.99th=[ 4047] 00:14:02.195 bw ( KiB/s): min= 9056, max=61760, per=99.28%, avg=52464.42, stdev=13312.56, samples=19 00:14:02.195 iops : min= 2264, max=15440, avg=13116.11, stdev=3328.14, samples=19 00:14:02.195 lat (usec) : 50=0.03%, 100=96.27%, 250=2.99%, 500=0.47%, 750=0.01% 00:14:02.195 lat (usec) : 1000=0.01% 00:14:02.195 lat (msec) : 2=0.06%, 4=0.15%, 10=0.01% 00:14:02.195 cpu : usr=2.03%, sys=11.45%, ctx=132120, majf=0, minf=797 00:14:02.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:02.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.196 issued rwts: total=0,132118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.196 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:02.196 00:14:02.196 Run status group 0 (all jobs): 00:14:02.196 WRITE: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=516MiB (541MB), run=10001-10001msec 00:14:02.196 00:14:02.196 Disk stats (read/write): 00:14:02.196 ublkb0: ios=0/130532, merge=0/0, ticks=0/8355, in_queue=8355, util=99.08% 00:14:02.196 00:02:52 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:02.196 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.196 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.196 [2024-11-21 00:02:52.581146] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:02.454 [2024-11-21 00:02:52.618909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:02.454 [2024-11-21 00:02:52.619778] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:02.454 [2024-11-21 00:02:52.628322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:02.454 [2024-11-21 00:02:52.628577] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:02.454 [2024-11-21 00:02:52.628596] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:02.454 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.454 00:02:52 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:02.454 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.455 [2024-11-21 00:02:52.644389] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:02.455 request: 00:14:02.455 { 00:14:02.455 "ublk_id": 0, 00:14:02.455 "method": "ublk_stop_disk", 00:14:02.455 "req_id": 1 00:14:02.455 } 00:14:02.455 Got JSON-RPC error response 00:14:02.455 response: 00:14:02.455 { 00:14:02.455 "code": -19, 00:14:02.455 "message": "No such device" 00:14:02.455 } 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.455 00:02:52 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.455 [2024-11-21 00:02:52.660396] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:02.455 [2024-11-21 00:02:52.661735] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:02.455 [2024-11-21 00:02:52.661762] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.455 00:02:52 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.455 00:02:52 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:02.455 00:02:52 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.455 00:02:52 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:02.455 00:02:52 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:02.455 00:02:52 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:02.455 00:02:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.455 00:02:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:02.455 00:02:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:02.455 00:02:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:02.455 00:14:02.455 real 0m10.857s 00:14:02.455 user 0m0.509s 00:14:02.455 sys 0m1.217s 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.455 00:02:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.455 ************************************ 00:14:02.455 END TEST test_create_ublk 00:14:02.455 ************************************ 00:14:02.455 00:02:52 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:02.455 00:02:52 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:02.455 00:02:52 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.455 00:02:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.455 ************************************ 00:14:02.455 START TEST test_create_multi_ublk 00:14:02.455 ************************************ 00:14:02.455 00:02:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.714 [2024-11-21 00:02:52.880320] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:02.714 [2024-11-21 00:02:52.881476] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.714 00:02:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.714 [2024-11-21 00:02:52.963753] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:02.714 [2024-11-21 00:02:52.964076] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:02.714 [2024-11-21 00:02:52.964089] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:02.714 [2024-11-21 00:02:52.964095] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:02.714 [2024-11-21 00:02:52.976367] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:02.714 [2024-11-21 00:02:52.976385] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:02.714 [2024-11-21 00:02:52.988320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:02.714 [2024-11-21 00:02:52.988851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:02.714 [2024-11-21 00:02:53.024323] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:02.714 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.714 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:02.714 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.714 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:02.714 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.714 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.714 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.714 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:02.714 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:02.714 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.714 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.714 [2024-11-21 00:02:53.120419] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:02.714 [2024-11-21 00:02:53.120734] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:02.714 [2024-11-21 00:02:53.120746] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:02.714 [2024-11-21 00:02:53.120753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:02.714 [2024-11-21 00:02:53.132330] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:02.714 [2024-11-21 00:02:53.132353] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:02.973 [2024-11-21 00:02:53.144330] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:02.973 [2024-11-21 00:02:53.144863] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:02.973 [2024-11-21 00:02:53.157338] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.973 
00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.973 [2024-11-21 00:02:53.264422] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:02.973 [2024-11-21 00:02:53.264742] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:02.973 [2024-11-21 00:02:53.264751] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:02.973 [2024-11-21 00:02:53.264756] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:02.973 [2024-11-21 00:02:53.276333] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:02.973 [2024-11-21 00:02:53.276349] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:02.973 [2024-11-21 00:02:53.288336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:02.973 [2024-11-21 00:02:53.288869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:02.973 [2024-11-21 00:02:53.317336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.973 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.231 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.231 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:03.231 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:03.231 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.231 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.231 [2024-11-21 00:02:53.424429] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:03.231 [2024-11-21 00:02:53.424751] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:03.231 [2024-11-21 00:02:53.424764] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:03.231 [2024-11-21 00:02:53.424780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:03.231 
[2024-11-21 00:02:53.436338] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:03.231 [2024-11-21 00:02:53.436360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:03.231 [2024-11-21 00:02:53.448316] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:03.232 [2024-11-21 00:02:53.448849] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:03.232 [2024-11-21 00:02:53.460317] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:03.232 { 00:14:03.232 "ublk_device": "/dev/ublkb0", 00:14:03.232 "id": 0, 00:14:03.232 "queue_depth": 512, 00:14:03.232 "num_queues": 4, 00:14:03.232 "bdev_name": "Malloc0" 00:14:03.232 }, 00:14:03.232 { 00:14:03.232 "ublk_device": "/dev/ublkb1", 00:14:03.232 "id": 1, 00:14:03.232 "queue_depth": 512, 00:14:03.232 "num_queues": 4, 00:14:03.232 "bdev_name": "Malloc1" 00:14:03.232 }, 00:14:03.232 { 00:14:03.232 "ublk_device": "/dev/ublkb2", 00:14:03.232 "id": 2, 00:14:03.232 "queue_depth": 512, 00:14:03.232 "num_queues": 4, 00:14:03.232 "bdev_name": "Malloc2" 00:14:03.232 }, 00:14:03.232 { 00:14:03.232 "ublk_device": "/dev/ublkb3", 00:14:03.232 "id": 3, 00:14:03.232 "queue_depth": 512, 00:14:03.232 "num_queues": 4, 00:14:03.232 "bdev_name": "Malloc3" 00:14:03.232 } 00:14:03.232 ]' 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:03.232 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:03.490 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:03.490 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:03.490 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:03.490 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:03.490 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:03.490 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:03.490 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:03.490 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:03.490 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:03.491 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:03.491 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:03.491 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:03.491 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:03.491 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:03.491 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:03.491 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:03.491 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:03.749 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:03.749 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:03.749 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:03.749 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:03.749 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:03.749 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:03.749 00:02:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.749 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.749 [2024-11-21 00:02:54.121394] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.008 [2024-11-21 00:02:54.169353] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.008 [2024-11-21 00:02:54.170176] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.008 [2024-11-21 00:02:54.177332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.008 [2024-11-21 00:02:54.177574] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:04.008 [2024-11-21 00:02:54.177586] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.008 [2024-11-21 00:02:54.187393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.008 [2024-11-21 00:02:54.221362] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.008 [2024-11-21 00:02:54.222120] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.008 [2024-11-21 00:02:54.229320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.008 [2024-11-21 00:02:54.229567] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:04.008 [2024-11-21 00:02:54.229578] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.008 [2024-11-21 00:02:54.245386] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.008 [2024-11-21 00:02:54.274877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.008 [2024-11-21 00:02:54.275868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.008 [2024-11-21 00:02:54.285329] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.008 [2024-11-21 00:02:54.285579] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:04.008 [2024-11-21 00:02:54.285590] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.008 [2024-11-21 00:02:54.301399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.008 [2024-11-21 00:02:54.333352] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.008 [2024-11-21 00:02:54.334008] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.008 [2024-11-21 00:02:54.341325] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.008 [2024-11-21 00:02:54.341564] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:04.008 [2024-11-21 00:02:54.341575] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.008 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:04.267 [2024-11-21 00:02:54.533372] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:04.267 [2024-11-21 00:02:54.534646] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:04.267 [2024-11-21 00:02:54.534676] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:04.267 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:04.267 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.267 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:04.267 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.267 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.267 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.267 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.267 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:04.267 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.267 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:04.525 00:02:54 
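The multi-ublk pass completing here follows a uniform per-device pattern: create a malloc bdev, export it through the kernel ublk driver, verify the exported parameters via ublk_get_disks, then unwind everything in reverse order. Below is a minimal standalone sketch of that cycle, assuming a running spdk_tgt on which the ublk target has already been created; the rpc_cmd calls in the trace correspond to direct scripts/rpc.py invocations here, and the repo path and device count are illustrative.

#!/usr/bin/env bash
# Sketch only: mirrors the RPCs traced above. Assumes spdk_tgt is running
# and `rpc.py ublk_create_target` has already been called.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for i in 0 1 2 3; do
  "$rpc" bdev_malloc_create -b "Malloc$i" 128 4096     # 128 MiB bdev, 4 KiB blocks
  "$rpc" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # exports /dev/ublkb$i
done

disks=$("$rpc" ublk_get_disks)
for i in 0 1 2 3; do                                   # same fields the harness checks
  [[ $(jq -r ".[$i].ublk_device" <<<"$disks") == "/dev/ublkb$i" ]] || exit 1
  [[ $(jq -r ".[$i].queue_depth" <<<"$disks") == 512 ]] || exit 1
  [[ $(jq -r ".[$i].bdev_name"   <<<"$disks") == "Malloc$i" ]] || exit 1
done

for i in 0 1 2 3; do "$rpc" ublk_stop_disk "$i"; done  # detach each kernel device
"$rpc" -t 120 ublk_destroy_target                      # generous RPC timeout, as in the log
for i in 0 1 2 3; do "$rpc" bdev_malloc_delete "Malloc$i"; done

The -t 120 on ublk_destroy_target matches the trace above; presumably teardown needs the extra headroom because the target waits for the kernel-side queues to quiesce before shutting down.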
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:04.525 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:04.526 00:14:04.526 real 0m2.064s 00:14:04.526 user 0m0.825s 00:14:04.526 sys 0m0.121s 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.526 ************************************ 00:14:04.526 END TEST test_create_multi_ublk 00:14:04.526 ************************************ 00:14:04.526 00:02:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.784 00:02:54 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:04.784 00:02:54 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:04.784 00:02:54 ublk -- ublk/ublk.sh@130 -- # killprocess 82448 00:14:04.784 00:02:54 ublk -- common/autotest_common.sh@950 -- # '[' -z 82448 ']' 00:14:04.784 00:02:54 ublk -- common/autotest_common.sh@954 -- # kill -0 82448 00:14:04.784 00:02:54 ublk -- common/autotest_common.sh@955 -- # uname 00:14:04.784 00:02:54 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.784 00:02:54 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82448 00:14:04.784 00:02:54 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:04.784 00:02:54 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:04.784 killing process with pid 82448 00:14:04.784 00:02:54 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82448' 00:14:04.784 00:02:54 ublk -- common/autotest_common.sh@969 -- # kill 82448 00:14:04.784 00:02:54 ublk -- common/autotest_common.sh@974 -- # wait 82448 00:14:05.043 [2024-11-21 00:02:55.218305] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:05.043 [2024-11-21 00:02:55.218381] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:05.302 00:14:05.302 real 0m18.500s 00:14:05.302 user 0m28.386s 00:14:05.302 sys 0m7.759s 00:14:05.302 00:02:55 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.303 ************************************ 00:14:05.303 END TEST ublk 00:14:05.303 ************************************ 00:14:05.303 00:02:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.303 00:02:55 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:05.303 
00:02:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:05.303 00:02:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.303 00:02:55 -- common/autotest_common.sh@10 -- # set +x 00:14:05.303 ************************************ 00:14:05.303 START TEST ublk_recovery 00:14:05.303 ************************************ 00:14:05.303 00:02:55 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:05.303 * Looking for test storage... 00:14:05.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:05.303 00:02:55 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:05.303 00:02:55 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:05.303 00:02:55 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.564 00:02:55 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:05.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.564 --rc genhtml_branch_coverage=1 00:14:05.564 --rc genhtml_function_coverage=1 00:14:05.564 --rc genhtml_legend=1 00:14:05.564 --rc geninfo_all_blocks=1 00:14:05.564 --rc geninfo_unexecuted_blocks=1 00:14:05.564 00:14:05.564 ' 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:05.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.564 --rc genhtml_branch_coverage=1 00:14:05.564 --rc genhtml_function_coverage=1 00:14:05.564 --rc genhtml_legend=1 00:14:05.564 --rc geninfo_all_blocks=1 00:14:05.564 --rc geninfo_unexecuted_blocks=1 00:14:05.564 00:14:05.564 ' 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:05.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.564 --rc genhtml_branch_coverage=1 00:14:05.564 --rc genhtml_function_coverage=1 00:14:05.564 --rc genhtml_legend=1 00:14:05.564 --rc geninfo_all_blocks=1 00:14:05.564 --rc geninfo_unexecuted_blocks=1 00:14:05.564 00:14:05.564 ' 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:05.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.564 --rc genhtml_branch_coverage=1 00:14:05.564 --rc genhtml_function_coverage=1 00:14:05.564 --rc genhtml_legend=1 00:14:05.564 --rc geninfo_all_blocks=1 00:14:05.564 --rc geninfo_unexecuted_blocks=1 00:14:05.564 00:14:05.564 ' 00:14:05.564 00:02:55 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:05.564 00:02:55 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:05.564 00:02:55 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:05.564 00:02:55 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:05.564 00:02:55 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:05.564 00:02:55 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:05.564 00:02:55 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:05.564 00:02:55 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:05.564 00:02:55 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:05.564 00:02:55 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:05.564 00:02:55 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82818 00:14:05.564 00:02:55 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:05.564 00:02:55 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82818 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 82818 ']' 00:14:05.564 00:02:55 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.564 00:02:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:05.564 [2024-11-21 00:02:55.851411] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:05.564 [2024-11-21 00:02:55.851544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82818 ] 00:14:05.825 [2024-11-21 00:02:55.985543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:05.825 [2024-11-21 00:02:56.033336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.825 [2024-11-21 00:02:56.033405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.395 00:02:56 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.395 00:02:56 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:06.395 00:02:56 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:06.395 00:02:56 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.395 00:02:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 [2024-11-21 00:02:56.691319] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:06.395 [2024-11-21 00:02:56.692673] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:06.395 00:02:56 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.395 00:02:56 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:06.395 00:02:56 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.395 00:02:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 malloc0 00:14:06.395 00:02:56 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.395 00:02:56 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:06.395 00:02:56 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.395 00:02:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 [2024-11-21 00:02:56.731443] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:14:06.395 [2024-11-21 00:02:56.731543] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:06.395 [2024-11-21 00:02:56.731551] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:06.395 [2024-11-21 00:02:56.731568] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:06.395 [2024-11-21 00:02:56.740427] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:06.395 [2024-11-21 00:02:56.740454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:06.395 [2024-11-21 00:02:56.747335] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:06.395 [2024-11-21 00:02:56.747496] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:06.395 [2024-11-21 00:02:56.770322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:06.395 1 00:14:06.395 00:02:56 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.395 00:02:56 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:07.778 00:02:57 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82851 00:14:07.778 00:02:57 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:07.778 00:02:57 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:07.778 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:07.778 fio-3.35 00:14:07.778 Starting 1 process 00:14:13.041 00:03:02 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82818 00:14:13.041 00:03:02 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:18.329 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82818 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:18.329 00:03:07 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=82962 00:14:18.329 00:03:07 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.329 00:03:07 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:18.329 00:03:07 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 82962 00:14:18.329 00:03:07 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 82962 ']' 00:14:18.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.329 00:03:07 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.329 00:03:07 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.329 00:03:07 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.329 00:03:07 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.329 00:03:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.329 [2024-11-21 00:03:07.878414] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
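Stripped of the harness plumbing, the recovery scenario running here is: create a ublk disk, put it under sustained fio load, hard-kill the SPDK target mid-run, start a fresh target, and let ublk_recover_disk re-attach the surviving /dev/ublkb1 so the same fio process can finish its 60-second run. A condensed sketch under those assumptions follows; the real script uses waitforlisten rather than fixed sleeps and pins fio with taskset -c 2-3, both simplified here.

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

start_target() {
  "$tgt" -m 0x3 -L ublk & tgt_pid=$!
  sleep 1                              # stand-in for the harness's waitforlisten
  "$rpc" ublk_create_target
  "$rpc" bdev_malloc_create -b malloc0 64 4096
}

start_target
"$rpc" ublk_start_disk malloc0 1 -q 2 -d 128   # exports /dev/ublkb1

fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
fio_pid=$!

sleep 5
kill -9 "$tgt_pid"                     # hard-kill the target under I/O
sleep 5
start_target                           # new process, same bdev name
"$rpc" ublk_recover_disk malloc0 1     # re-bind ublk 1 instead of ublk_start_disk
wait "$fio_pid"                        # fio rides out the outage and completes

The fio summary further down (roughly 1.52 M reads and 1.52 M writes against ublkb1 over the 60-second window, util 99.91%) is the evidence that the queue-depth-128 workload survived the kill/recover cycle.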
00:14:18.329 [2024-11-21 00:03:07.879284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82962 ] 00:14:18.329 [2024-11-21 00:03:08.016487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:18.329 [2024-11-21 00:03:08.077761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.329 [2024-11-21 00:03:08.077810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.329 00:03:08 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.329 00:03:08 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:18.329 00:03:08 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:18.329 00:03:08 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.329 00:03:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.329 [2024-11-21 00:03:08.726315] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:18.329 [2024-11-21 00:03:08.727558] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:18.329 00:03:08 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.329 00:03:08 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:18.329 00:03:08 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.329 00:03:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.587 malloc0 00:14:18.587 00:03:08 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.587 00:03:08 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:18.587 00:03:08 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.587 00:03:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.587 [2024-11-21 00:03:08.766422] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:18.587 [2024-11-21 00:03:08.766465] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:18.587 [2024-11-21 00:03:08.766472] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:18.587 [2024-11-21 00:03:08.774352] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:18.587 [2024-11-21 00:03:08.774369] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:18.587 [2024-11-21 00:03:08.774382] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:18.587 1 00:14:18.587 [2024-11-21 00:03:08.774441] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:18.587 00:03:08 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.587 00:03:08 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82851 00:14:18.587 [2024-11-21 00:03:08.782326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:18.587 [2024-11-21 00:03:08.788821] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:18.587 [2024-11-21 00:03:08.796515] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:18.587 [2024-11-21 
00:03:08.796532] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:14.947 00:15:14.947 fio_test: (groupid=0, jobs=1): err= 0: pid=82854: Thu Nov 21 00:03:58 2024 00:15:14.947 read: IOPS=25.3k, BW=99.0MiB/s (104MB/s)(5941MiB/60002msec) 00:15:14.947 slat (nsec): min=1132, max=309733, avg=5537.83, stdev=1582.32 00:15:14.947 clat (usec): min=663, max=6018.9k, avg=2435.62, stdev=35844.55 00:15:14.947 lat (usec): min=675, max=6018.9k, avg=2441.16, stdev=35844.55 00:15:14.947 clat percentiles (usec): 00:15:14.947 | 1.00th=[ 1876], 5.00th=[ 2008], 10.00th=[ 2024], 20.00th=[ 2057], 00:15:14.947 | 30.00th=[ 2073], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2114], 00:15:14.947 | 70.00th=[ 2147], 80.00th=[ 2147], 90.00th=[ 2212], 95.00th=[ 3064], 00:15:14.947 | 99.00th=[ 5014], 99.50th=[ 5342], 99.90th=[ 6718], 99.95th=[ 7373], 00:15:14.947 | 99.99th=[12256] 00:15:14.947 bw ( KiB/s): min=21064, max=116264, per=100.00%, avg=111723.58, stdev=11825.95, samples=108 00:15:14.947 iops : min= 5266, max=29066, avg=27930.89, stdev=2956.48, samples=108 00:15:14.947 write: IOPS=25.3k, BW=98.9MiB/s (104MB/s)(5934MiB/60002msec); 0 zone resets 00:15:14.947 slat (nsec): min=1490, max=240163, avg=5824.05, stdev=1644.65 00:15:14.947 clat (usec): min=632, max=6019.1k, avg=2605.10, stdev=41986.94 00:15:14.947 lat (usec): min=641, max=6019.2k, avg=2610.93, stdev=41986.94 00:15:14.947 clat percentiles (usec): 00:15:14.947 | 1.00th=[ 1926], 5.00th=[ 2114], 10.00th=[ 2147], 20.00th=[ 2147], 00:15:14.947 | 30.00th=[ 2180], 40.00th=[ 2180], 50.00th=[ 2212], 60.00th=[ 2212], 00:15:14.947 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2311], 95.00th=[ 3032], 00:15:14.947 | 99.00th=[ 5014], 99.50th=[ 5407], 99.90th=[ 6915], 99.95th=[ 7504], 00:15:14.947 | 99.99th=[12911] 00:15:14.947 bw ( KiB/s): min=21592, max=116128, per=100.00%, avg=111569.92, stdev=11762.62, samples=108 00:15:14.947 iops : min= 5398, max=29032, avg=27892.47, stdev=2940.65, samples=108 00:15:14.947 lat (usec) : 750=0.01%, 1000=0.01% 00:15:14.947 lat (msec) : 2=3.07%, 4=94.28%, 10=2.63%, 20=0.01%, >=2000=0.01% 00:15:14.947 cpu : usr=5.43%, sys=29.29%, ctx=99319, majf=0, minf=13 00:15:14.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:14.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:14.947 issued rwts: total=1520904,1519005,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:14.947 00:15:14.947 Run status group 0 (all jobs): 00:15:14.947 READ: bw=99.0MiB/s (104MB/s), 99.0MiB/s-99.0MiB/s (104MB/s-104MB/s), io=5941MiB (6230MB), run=60002-60002msec 00:15:14.947 WRITE: bw=98.9MiB/s (104MB/s), 98.9MiB/s-98.9MiB/s (104MB/s-104MB/s), io=5934MiB (6222MB), run=60002-60002msec 00:15:14.947 00:15:14.947 Disk stats (read/write): 00:15:14.947 ublkb1: ios=1518033/1516071, merge=0/0, ticks=3615776/3736128, in_queue=7351904, util=99.91% 00:15:14.947 00:03:58 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:14.947 [2024-11-21 00:03:58.032516] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:14.947 [2024-11-21 00:03:58.066443] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:15:14.947 [2024-11-21 00:03:58.066683] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:14.947 [2024-11-21 00:03:58.073612] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:14.947 [2024-11-21 00:03:58.073784] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:14.947 [2024-11-21 00:03:58.077306] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.947 00:03:58 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:14.947 [2024-11-21 00:03:58.081698] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:14.947 [2024-11-21 00:03:58.085636] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:14.947 [2024-11-21 00:03:58.085739] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.947 00:03:58 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:14.947 00:03:58 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:14.947 00:03:58 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 82962 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 82962 ']' 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 82962 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82962 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82962' 00:15:14.947 killing process with pid 82962 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@969 -- # kill 82962 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@974 -- # wait 82962 00:15:14.947 [2024-11-21 00:03:58.342610] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:14.947 [2024-11-21 00:03:58.342682] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:14.947 ************************************ 00:15:14.947 END TEST ublk_recovery 00:15:14.947 ************************************ 00:15:14.947 00:15:14.947 real 1m3.103s 00:15:14.947 user 1m40.552s 00:15:14.947 sys 0m35.771s 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.947 00:03:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:14.947 00:03:58 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:14.947 00:03:58 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:14.947 00:03:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:14.947 00:03:58 -- common/autotest_common.sh@10 -- # set +x 00:15:14.947 00:03:58 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:14.947 00:03:58 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:14.947 00:03:58 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:14.947 00:03:58 -- 
spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:14.947 00:03:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:14.947 00:03:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:14.947 00:03:58 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:14.947 00:03:58 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:14.947 00:03:58 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:14.947 00:03:58 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:14.947 00:03:58 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:14.947 00:03:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:14.947 00:03:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.947 00:03:58 -- common/autotest_common.sh@10 -- # set +x 00:15:14.947 ************************************ 00:15:14.947 START TEST ftl 00:15:14.947 ************************************ 00:15:14.947 00:03:58 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:14.947 * Looking for test storage... 00:15:14.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.947 00:03:58 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:14.947 00:03:58 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:14.947 00:03:58 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:14.948 00:03:58 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:14.948 00:03:58 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.948 00:03:58 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.948 00:03:58 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.948 00:03:58 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.948 00:03:58 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.948 00:03:58 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.948 00:03:58 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.948 00:03:58 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.948 00:03:58 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.948 00:03:58 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.948 00:03:58 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.948 00:03:58 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:14.948 00:03:58 ftl -- scripts/common.sh@345 -- # : 1 00:15:14.948 00:03:58 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.948 00:03:58 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.948 00:03:58 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:14.948 00:03:58 ftl -- scripts/common.sh@353 -- # local d=1 00:15:14.948 00:03:58 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.948 00:03:58 ftl -- scripts/common.sh@355 -- # echo 1 00:15:14.948 00:03:58 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.948 00:03:58 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:14.948 00:03:58 ftl -- scripts/common.sh@353 -- # local d=2 00:15:14.948 00:03:58 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.948 00:03:58 ftl -- scripts/common.sh@355 -- # echo 2 00:15:14.948 00:03:58 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.948 00:03:58 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.948 00:03:58 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.948 00:03:58 ftl -- scripts/common.sh@368 -- # return 0 00:15:14.948 00:03:58 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.948 00:03:58 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:14.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.948 --rc genhtml_branch_coverage=1 00:15:14.948 --rc genhtml_function_coverage=1 00:15:14.948 --rc genhtml_legend=1 00:15:14.948 --rc geninfo_all_blocks=1 00:15:14.948 --rc geninfo_unexecuted_blocks=1 00:15:14.948 00:15:14.948 ' 00:15:14.948 00:03:58 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:14.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.948 --rc genhtml_branch_coverage=1 00:15:14.948 --rc genhtml_function_coverage=1 00:15:14.948 --rc genhtml_legend=1 00:15:14.948 --rc geninfo_all_blocks=1 00:15:14.948 --rc geninfo_unexecuted_blocks=1 00:15:14.948 00:15:14.948 ' 00:15:14.948 00:03:58 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:14.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.948 --rc genhtml_branch_coverage=1 00:15:14.948 --rc genhtml_function_coverage=1 00:15:14.948 --rc genhtml_legend=1 00:15:14.948 --rc geninfo_all_blocks=1 00:15:14.948 --rc geninfo_unexecuted_blocks=1 00:15:14.948 00:15:14.948 ' 00:15:14.948 00:03:58 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:14.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.948 --rc genhtml_branch_coverage=1 00:15:14.948 --rc genhtml_function_coverage=1 00:15:14.948 --rc genhtml_legend=1 00:15:14.948 --rc geninfo_all_blocks=1 00:15:14.948 --rc geninfo_unexecuted_blocks=1 00:15:14.948 00:15:14.948 ' 00:15:14.948 00:03:58 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:14.948 00:03:58 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:14.948 00:03:58 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.948 00:03:58 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.948 00:03:58 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:14.948 00:03:58 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:14.948 00:03:58 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.948 00:03:58 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:14.948 00:03:58 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:14.948 00:03:58 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.948 00:03:58 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.948 00:03:58 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:14.948 00:03:58 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:14.948 00:03:58 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:14.948 00:03:58 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:14.948 00:03:58 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:14.948 00:03:58 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:14.948 00:03:58 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.948 00:03:58 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.948 00:03:58 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:14.948 00:03:58 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:14.948 00:03:58 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:14.948 00:03:58 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:14.948 00:03:58 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:14.948 00:03:58 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:14.948 00:03:58 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:14.948 00:03:58 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:14.948 00:03:58 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.948 00:03:58 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.948 00:03:58 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.948 00:03:58 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:14.948 00:03:58 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:14.948 00:03:58 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:14.948 00:03:58 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:14.948 00:03:59 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:14.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:14.948 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:14.948 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:14.948 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:14.948 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:14.948 00:03:59 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83755 00:15:14.948 00:03:59 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83755 00:15:14.948 00:03:59 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:14.948 00:03:59 ftl -- common/autotest_common.sh@831 -- # '[' -z 83755 ']' 00:15:14.948 00:03:59 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.948 00:03:59 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.948 00:03:59 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.948 00:03:59 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.948 00:03:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:14.948 [2024-11-21 00:03:59.592986] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:14.948 [2024-11-21 00:03:59.593273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83755 ] 00:15:14.948 [2024-11-21 00:03:59.724624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.948 [2024-11-21 00:03:59.778614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.948 00:04:00 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.948 00:04:00 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:14.948 00:04:00 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:14.948 00:04:00 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:14.948 00:04:00 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:14.948 00:04:00 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@50 -- # break 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@63 -- # break 00:15:14.948 00:04:01 ftl -- ftl/ftl.sh@66 -- # killprocess 83755 00:15:14.948 00:04:01 ftl -- common/autotest_common.sh@950 -- # '[' -z 83755 ']' 00:15:14.948 00:04:01 ftl -- common/autotest_common.sh@954 -- # kill -0 83755 00:15:14.948 00:04:01 ftl -- common/autotest_common.sh@955 -- # uname 00:15:14.948 00:04:01 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.948 00:04:01 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83755 00:15:14.948 killing process with pid 83755 00:15:14.948 00:04:01 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:14.948 00:04:01 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:14.948 00:04:01 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83755' 00:15:14.948 00:04:01 ftl -- common/autotest_common.sh@969 -- # kill 83755 00:15:14.949 00:04:01 ftl -- common/autotest_common.sh@974 -- # wait 83755 00:15:14.949 00:04:02 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:14.949 00:04:02 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:14.949 00:04:02 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:14.949 00:04:02 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.949 00:04:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:14.949 ************************************ 00:15:14.949 START TEST ftl_fio_basic 00:15:14.949 ************************************ 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:14.949 * Looking for test storage... 00:15:14.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:14.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.949 --rc genhtml_branch_coverage=1 00:15:14.949 --rc genhtml_function_coverage=1 00:15:14.949 --rc genhtml_legend=1 00:15:14.949 --rc geninfo_all_blocks=1 00:15:14.949 --rc geninfo_unexecuted_blocks=1 00:15:14.949 00:15:14.949 ' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:14.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.949 --rc genhtml_branch_coverage=1 00:15:14.949 --rc genhtml_function_coverage=1 00:15:14.949 --rc genhtml_legend=1 00:15:14.949 --rc geninfo_all_blocks=1 00:15:14.949 --rc geninfo_unexecuted_blocks=1 00:15:14.949 00:15:14.949 ' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:14.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.949 --rc genhtml_branch_coverage=1 00:15:14.949 --rc genhtml_function_coverage=1 00:15:14.949 --rc genhtml_legend=1 00:15:14.949 --rc geninfo_all_blocks=1 00:15:14.949 --rc geninfo_unexecuted_blocks=1 00:15:14.949 00:15:14.949 ' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:14.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.949 --rc genhtml_branch_coverage=1 00:15:14.949 --rc genhtml_function_coverage=1 00:15:14.949 --rc genhtml_legend=1 00:15:14.949 --rc geninfo_all_blocks=1 00:15:14.949 --rc geninfo_unexecuted_blocks=1 00:15:14.949 00:15:14.949 ' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83875 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83875 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 83875 ']' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:14.949 00:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:14.949 [2024-11-21 00:04:02.467111] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
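At this point fio.sh has resolved its 'basic' suite (randw-verify, randw-verify-j2, randw-verify-depth128), picked 0000:00:11.0 as the data device and 0000:00:10.0 as the cache device, started its own target with `spdk_tgt -m 7` (three reactors, matching the "Total cores available: 3" notice that follows), and parked in `waitforlisten` until the RPC socket answers. A hedged sketch of that start-and-wait pattern; the polling loop is illustrative, not the exact helper:

# Launch the target on cores 0-2 and block until /var/tmp/spdk.sock responds.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
svcpid=$!
for ((i = 0; i < 100; i++)); do           # max_retries=100, as in the trace
    # rpc_get_methods only succeeds once the app is up and listening
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done
kill -0 "$svcpid"                         # sanity check: target still alive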
00:15:14.949 [2024-11-21 00:04:02.467819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83875 ] 00:15:14.949 [2024-11-21 00:04:02.606594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:14.949 [2024-11-21 00:04:02.650673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.949 [2024-11-21 00:04:02.650944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.950 [2024-11-21 00:04:02.650997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:14.950 { 00:15:14.950 "name": "nvme0n1", 00:15:14.950 "aliases": [ 00:15:14.950 "a4a03dce-c33a-4740-8488-0834408aebb7" 00:15:14.950 ], 00:15:14.950 "product_name": "NVMe disk", 00:15:14.950 "block_size": 4096, 00:15:14.950 "num_blocks": 1310720, 00:15:14.950 "uuid": "a4a03dce-c33a-4740-8488-0834408aebb7", 00:15:14.950 "numa_id": -1, 00:15:14.950 "assigned_rate_limits": { 00:15:14.950 "rw_ios_per_sec": 0, 00:15:14.950 "rw_mbytes_per_sec": 0, 00:15:14.950 "r_mbytes_per_sec": 0, 00:15:14.950 "w_mbytes_per_sec": 0 00:15:14.950 }, 00:15:14.950 "claimed": false, 00:15:14.950 "zoned": false, 00:15:14.950 "supported_io_types": { 00:15:14.950 "read": true, 00:15:14.950 "write": true, 00:15:14.950 "unmap": true, 00:15:14.950 "flush": true, 00:15:14.950 "reset": true, 00:15:14.950 "nvme_admin": true, 00:15:14.950 "nvme_io": true, 00:15:14.950 "nvme_io_md": false, 00:15:14.950 "write_zeroes": true, 00:15:14.950 "zcopy": false, 00:15:14.950 "get_zone_info": false, 00:15:14.950 "zone_management": false, 00:15:14.950 "zone_append": false, 00:15:14.950 "compare": true, 00:15:14.950 "compare_and_write": false, 00:15:14.950 "abort": true, 00:15:14.950 
"seek_hole": false, 00:15:14.950 "seek_data": false, 00:15:14.950 "copy": true, 00:15:14.950 "nvme_iov_md": false 00:15:14.950 }, 00:15:14.950 "driver_specific": { 00:15:14.950 "nvme": [ 00:15:14.950 { 00:15:14.950 "pci_address": "0000:00:11.0", 00:15:14.950 "trid": { 00:15:14.950 "trtype": "PCIe", 00:15:14.950 "traddr": "0000:00:11.0" 00:15:14.950 }, 00:15:14.950 "ctrlr_data": { 00:15:14.950 "cntlid": 0, 00:15:14.950 "vendor_id": "0x1b36", 00:15:14.950 "model_number": "QEMU NVMe Ctrl", 00:15:14.950 "serial_number": "12341", 00:15:14.950 "firmware_revision": "8.0.0", 00:15:14.950 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:14.950 "oacs": { 00:15:14.950 "security": 0, 00:15:14.950 "format": 1, 00:15:14.950 "firmware": 0, 00:15:14.950 "ns_manage": 1 00:15:14.950 }, 00:15:14.950 "multi_ctrlr": false, 00:15:14.950 "ana_reporting": false 00:15:14.950 }, 00:15:14.950 "vs": { 00:15:14.950 "nvme_version": "1.4" 00:15:14.950 }, 00:15:14.950 "ns_data": { 00:15:14.950 "id": 1, 00:15:14.950 "can_share": false 00:15:14.950 } 00:15:14.950 } 00:15:14.950 ], 00:15:14.950 "mp_policy": "active_passive" 00:15:14.950 } 00:15:14.950 } 00:15:14.950 ]' 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:14.950 00:04:03 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=137f9d3d-0bea-4000-9744-25aad81576ac 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 137f9d3d-0bea-4000-9744-25aad81576ac 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=250a71d1-afaa-47ce-a753-87d2552e81b1 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 250a71d1-afaa-47ce-a753-87d2552e81b1 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=250a71d1-afaa-47ce-a753-87d2552e81b1 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 250a71d1-afaa-47ce-a753-87d2552e81b1 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=250a71d1-afaa-47ce-a753-87d2552e81b1 
00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 250a71d1-afaa-47ce-a753-87d2552e81b1 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:14.950 { 00:15:14.950 "name": "250a71d1-afaa-47ce-a753-87d2552e81b1", 00:15:14.950 "aliases": [ 00:15:14.950 "lvs/nvme0n1p0" 00:15:14.950 ], 00:15:14.950 "product_name": "Logical Volume", 00:15:14.950 "block_size": 4096, 00:15:14.950 "num_blocks": 26476544, 00:15:14.950 "uuid": "250a71d1-afaa-47ce-a753-87d2552e81b1", 00:15:14.950 "assigned_rate_limits": { 00:15:14.950 "rw_ios_per_sec": 0, 00:15:14.950 "rw_mbytes_per_sec": 0, 00:15:14.950 "r_mbytes_per_sec": 0, 00:15:14.950 "w_mbytes_per_sec": 0 00:15:14.950 }, 00:15:14.950 "claimed": false, 00:15:14.950 "zoned": false, 00:15:14.950 "supported_io_types": { 00:15:14.950 "read": true, 00:15:14.950 "write": true, 00:15:14.950 "unmap": true, 00:15:14.950 "flush": false, 00:15:14.950 "reset": true, 00:15:14.950 "nvme_admin": false, 00:15:14.950 "nvme_io": false, 00:15:14.950 "nvme_io_md": false, 00:15:14.950 "write_zeroes": true, 00:15:14.950 "zcopy": false, 00:15:14.950 "get_zone_info": false, 00:15:14.950 "zone_management": false, 00:15:14.950 "zone_append": false, 00:15:14.950 "compare": false, 00:15:14.950 "compare_and_write": false, 00:15:14.950 "abort": false, 00:15:14.950 "seek_hole": true, 00:15:14.950 "seek_data": true, 00:15:14.950 "copy": false, 00:15:14.950 "nvme_iov_md": false 00:15:14.950 }, 00:15:14.950 "driver_specific": { 00:15:14.950 "lvol": { 00:15:14.950 "lvol_store_uuid": "137f9d3d-0bea-4000-9744-25aad81576ac", 00:15:14.950 "base_bdev": "nvme0n1", 00:15:14.950 "thin_provision": true, 00:15:14.950 "num_allocated_clusters": 0, 00:15:14.950 "snapshot": false, 00:15:14.950 "clone": false, 00:15:14.950 "esnap_clone": false 00:15:14.950 } 00:15:14.950 } 00:15:14.950 } 00:15:14.950 ]' 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 250a71d1-afaa-47ce-a753-87d2552e81b1 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=250a71d1-afaa-47ce-a753-87d2552e81b1 00:15:14.950 00:04:04 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:14.950 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:14.951 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:14.951 00:04:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 250a71d1-afaa-47ce-a753-87d2552e81b1 00:15:14.951 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:14.951 { 00:15:14.951 "name": "250a71d1-afaa-47ce-a753-87d2552e81b1", 00:15:14.951 "aliases": [ 00:15:14.951 "lvs/nvme0n1p0" 00:15:14.951 ], 00:15:14.951 "product_name": "Logical Volume", 00:15:14.951 "block_size": 4096, 00:15:14.951 "num_blocks": 26476544, 00:15:14.951 "uuid": "250a71d1-afaa-47ce-a753-87d2552e81b1", 00:15:14.951 "assigned_rate_limits": { 00:15:14.951 "rw_ios_per_sec": 0, 00:15:14.951 "rw_mbytes_per_sec": 0, 00:15:14.951 "r_mbytes_per_sec": 0, 00:15:14.951 "w_mbytes_per_sec": 0 00:15:14.951 }, 00:15:14.951 "claimed": false, 00:15:14.951 "zoned": false, 00:15:14.951 "supported_io_types": { 00:15:14.951 "read": true, 00:15:14.951 "write": true, 00:15:14.951 "unmap": true, 00:15:14.951 "flush": false, 00:15:14.951 "reset": true, 00:15:14.951 "nvme_admin": false, 00:15:14.951 "nvme_io": false, 00:15:14.951 "nvme_io_md": false, 00:15:14.951 "write_zeroes": true, 00:15:14.951 "zcopy": false, 00:15:14.951 "get_zone_info": false, 00:15:14.951 "zone_management": false, 00:15:14.951 "zone_append": false, 00:15:14.951 "compare": false, 00:15:14.951 "compare_and_write": false, 00:15:14.951 "abort": false, 00:15:14.951 "seek_hole": true, 00:15:14.951 "seek_data": true, 00:15:14.951 "copy": false, 00:15:14.951 "nvme_iov_md": false 00:15:14.951 }, 00:15:14.951 "driver_specific": { 00:15:14.951 "lvol": { 00:15:14.951 "lvol_store_uuid": "137f9d3d-0bea-4000-9744-25aad81576ac", 00:15:14.951 "base_bdev": "nvme0n1", 00:15:14.951 "thin_provision": true, 00:15:14.951 "num_allocated_clusters": 0, 00:15:14.951 "snapshot": false, 00:15:14.951 "clone": false, 00:15:14.951 "esnap_clone": false 00:15:14.951 } 00:15:14.951 } 00:15:14.951 } 00:15:14.951 ]' 00:15:14.951 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:14.951 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:14.951 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:14.951 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:14.951 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:14.951 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:14.951 00:04:05 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:14.951 00:04:05 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:15.210 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 250a71d1-afaa-47ce-a753-87d2552e81b1 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=250a71d1-afaa-47ce-a753-87d2552e81b1 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 250a71d1-afaa-47ce-a753-87d2552e81b1 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:15.210 { 00:15:15.210 "name": "250a71d1-afaa-47ce-a753-87d2552e81b1", 00:15:15.210 "aliases": [ 00:15:15.210 "lvs/nvme0n1p0" 00:15:15.210 ], 00:15:15.210 "product_name": "Logical Volume", 00:15:15.210 "block_size": 4096, 00:15:15.210 "num_blocks": 26476544, 00:15:15.210 "uuid": "250a71d1-afaa-47ce-a753-87d2552e81b1", 00:15:15.210 "assigned_rate_limits": { 00:15:15.210 "rw_ios_per_sec": 0, 00:15:15.210 "rw_mbytes_per_sec": 0, 00:15:15.210 "r_mbytes_per_sec": 0, 00:15:15.210 "w_mbytes_per_sec": 0 00:15:15.210 }, 00:15:15.210 "claimed": false, 00:15:15.210 "zoned": false, 00:15:15.210 "supported_io_types": { 00:15:15.210 "read": true, 00:15:15.210 "write": true, 00:15:15.210 "unmap": true, 00:15:15.210 "flush": false, 00:15:15.210 "reset": true, 00:15:15.210 "nvme_admin": false, 00:15:15.210 "nvme_io": false, 00:15:15.210 "nvme_io_md": false, 00:15:15.210 "write_zeroes": true, 00:15:15.210 "zcopy": false, 00:15:15.210 "get_zone_info": false, 00:15:15.210 "zone_management": false, 00:15:15.210 "zone_append": false, 00:15:15.210 "compare": false, 00:15:15.210 "compare_and_write": false, 00:15:15.210 "abort": false, 00:15:15.210 "seek_hole": true, 00:15:15.210 "seek_data": true, 00:15:15.210 "copy": false, 00:15:15.210 "nvme_iov_md": false 00:15:15.210 }, 00:15:15.210 "driver_specific": { 00:15:15.210 "lvol": { 00:15:15.210 "lvol_store_uuid": "137f9d3d-0bea-4000-9744-25aad81576ac", 00:15:15.210 "base_bdev": "nvme0n1", 00:15:15.210 "thin_provision": true, 00:15:15.210 "num_allocated_clusters": 0, 00:15:15.210 "snapshot": false, 00:15:15.210 "clone": false, 00:15:15.210 "esnap_clone": false 00:15:15.210 } 00:15:15.210 } 00:15:15.210 } 00:15:15.210 ]' 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:15.210 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:15.471 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:15.471 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:15.471 00:04:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:15.471 00:04:05 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:15.471 00:04:05 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:15.471 00:04:05 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 250a71d1-afaa-47ce-a753-87d2552e81b1 -c nvc0n1p0 --l2p_dram_limit 60 00:15:15.471 [2024-11-21 00:04:05.819231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.471 [2024-11-21 00:04:05.819275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:15.471 [2024-11-21 00:04:05.819288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:15.471 
[2024-11-21 00:04:05.819308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.471 [2024-11-21 00:04:05.819369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.471 [2024-11-21 00:04:05.819380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:15.471 [2024-11-21 00:04:05.819388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:15:15.471 [2024-11-21 00:04:05.819397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.471 [2024-11-21 00:04:05.819427] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:15.471 [2024-11-21 00:04:05.819653] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:15.471 [2024-11-21 00:04:05.819686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.471 [2024-11-21 00:04:05.819695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:15.471 [2024-11-21 00:04:05.819711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:15:15.471 [2024-11-21 00:04:05.819719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.471 [2024-11-21 00:04:05.819827] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID eb7e3a1f-c117-4f1c-bea7-0df261bedc17 00:15:15.471 [2024-11-21 00:04:05.821090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.472 [2024-11-21 00:04:05.821217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:15.472 [2024-11-21 00:04:05.821234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:15.472 [2024-11-21 00:04:05.821240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.472 [2024-11-21 00:04:05.828105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.472 [2024-11-21 00:04:05.828211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:15.472 [2024-11-21 00:04:05.828226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.783 ms 00:15:15.472 [2024-11-21 00:04:05.828233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.472 [2024-11-21 00:04:05.828334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.472 [2024-11-21 00:04:05.828353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:15.472 [2024-11-21 00:04:05.828361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:15:15.472 [2024-11-21 00:04:05.828367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.472 [2024-11-21 00:04:05.828414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.472 [2024-11-21 00:04:05.828423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:15.472 [2024-11-21 00:04:05.828431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:15.472 [2024-11-21 00:04:05.828437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.472 [2024-11-21 00:04:05.828464] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:15.472 [2024-11-21 00:04:05.830061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.472 [2024-11-21 
00:04:05.830091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:15.472 [2024-11-21 00:04:05.830098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:15:15.472 [2024-11-21 00:04:05.830106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.472 [2024-11-21 00:04:05.830147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.472 [2024-11-21 00:04:05.830156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:15.472 [2024-11-21 00:04:05.830163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:15.472 [2024-11-21 00:04:05.830172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.472 [2024-11-21 00:04:05.830197] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:15.472 [2024-11-21 00:04:05.830331] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:15.472 [2024-11-21 00:04:05.830342] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:15.472 [2024-11-21 00:04:05.830355] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:15.472 [2024-11-21 00:04:05.830372] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:15.472 [2024-11-21 00:04:05.830390] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:15.472 [2024-11-21 00:04:05.830397] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:15.472 [2024-11-21 00:04:05.830409] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:15.472 [2024-11-21 00:04:05.830424] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:15.472 [2024-11-21 00:04:05.830432] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:15.472 [2024-11-21 00:04:05.830439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.472 [2024-11-21 00:04:05.830447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:15.472 [2024-11-21 00:04:05.830453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:15:15.472 [2024-11-21 00:04:05.830460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.472 [2024-11-21 00:04:05.830530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.472 [2024-11-21 00:04:05.830540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:15.472 [2024-11-21 00:04:05.830546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:15.472 [2024-11-21 00:04:05.830555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.472 [2024-11-21 00:04:05.830658] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:15.472 [2024-11-21 00:04:05.830673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:15.472 [2024-11-21 00:04:05.830680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:15.472 [2024-11-21 00:04:05.830688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:15.472 [2024-11-21 00:04:05.830694] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:15.472 [2024-11-21 00:04:05.830702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:15.472 [2024-11-21 00:04:05.830707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:15.472 [2024-11-21 00:04:05.830715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:15.472 [2024-11-21 00:04:05.830723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:15.472 [2024-11-21 00:04:05.830732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:15.472 [2024-11-21 00:04:05.830738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:15.472 [2024-11-21 00:04:05.830745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:15.472 [2024-11-21 00:04:05.830751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:15.472 [2024-11-21 00:04:05.830761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:15.472 [2024-11-21 00:04:05.830768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:15.472 [2024-11-21 00:04:05.830776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:15.472 [2024-11-21 00:04:05.830782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:15.472 [2024-11-21 00:04:05.830790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:15.472 [2024-11-21 00:04:05.830799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:15.472 [2024-11-21 00:04:05.830807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:15.472 [2024-11-21 00:04:05.830814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:15.472 [2024-11-21 00:04:05.830822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:15.472 [2024-11-21 00:04:05.830827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:15.472 [2024-11-21 00:04:05.830835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:15.472 [2024-11-21 00:04:05.830841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:15.472 [2024-11-21 00:04:05.830849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:15.472 [2024-11-21 00:04:05.830855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:15.472 [2024-11-21 00:04:05.830863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:15.472 [2024-11-21 00:04:05.830869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:15.472 [2024-11-21 00:04:05.830878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:15.472 [2024-11-21 00:04:05.830885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:15.472 [2024-11-21 00:04:05.830892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:15.472 [2024-11-21 00:04:05.830898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:15.472 [2024-11-21 00:04:05.830905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:15.472 [2024-11-21 00:04:05.830911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:15.472 [2024-11-21 00:04:05.830920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:15.472 [2024-11-21 00:04:05.830926] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:15.472 [2024-11-21 00:04:05.830934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:15.472 [2024-11-21 00:04:05.830941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:15.472 [2024-11-21 00:04:05.830948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:15.472 [2024-11-21 00:04:05.830954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:15.473 [2024-11-21 00:04:05.830961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:15.473 [2024-11-21 00:04:05.830967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:15.473 [2024-11-21 00:04:05.830975] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:15.473 [2024-11-21 00:04:05.830981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:15.473 [2024-11-21 00:04:05.830991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:15.473 [2024-11-21 00:04:05.830997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:15.473 [2024-11-21 00:04:05.831007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:15.473 [2024-11-21 00:04:05.831013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:15.473 [2024-11-21 00:04:05.831021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:15.473 [2024-11-21 00:04:05.831029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:15.473 [2024-11-21 00:04:05.831037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:15.473 [2024-11-21 00:04:05.831043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:15.473 [2024-11-21 00:04:05.831053] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:15.473 [2024-11-21 00:04:05.831062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:15.473 [2024-11-21 00:04:05.831070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:15.473 [2024-11-21 00:04:05.831077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:15.473 [2024-11-21 00:04:05.831085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:15.473 [2024-11-21 00:04:05.831091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:15.473 [2024-11-21 00:04:05.831100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:15.473 [2024-11-21 00:04:05.831106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:15.473 [2024-11-21 00:04:05.831115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:15.473 [2024-11-21 00:04:05.831120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:15.473 [2024-11-21 00:04:05.831127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:15.473 [2024-11-21 00:04:05.831133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:15.473 [2024-11-21 00:04:05.831140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:15.473 [2024-11-21 00:04:05.831145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:15.473 [2024-11-21 00:04:05.831153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:15.473 [2024-11-21 00:04:05.831158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:15.473 [2024-11-21 00:04:05.831165] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:15.473 [2024-11-21 00:04:05.831171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:15.473 [2024-11-21 00:04:05.831178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:15.473 [2024-11-21 00:04:05.831184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:15.473 [2024-11-21 00:04:05.831191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:15.473 [2024-11-21 00:04:05.831196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:15.473 [2024-11-21 00:04:05.831203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.473 [2024-11-21 00:04:05.831209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:15.473 [2024-11-21 00:04:05.831217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:15:15.473 [2024-11-21 00:04:05.831231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.473 [2024-11-21 00:04:05.831533] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
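The layout dump above is internally consistent and worth sanity-checking: `bdev_ftl_create` was invoked with `--l2p_dram_limit 60` (l2p_percentage=60 in fio.sh), the device exposes 20971520 user blocks, and at 4 bytes per L2P entry the full table is exactly the 80.00 MiB the `l2p` region reports; only about 60 MiB of it may stay resident, which the later "l2p maximum resident size is: 59 (of 60) MiB" notice confirms. The arithmetic, using only numbers from the dump:

# 20971520 L2P entries x 4 B each = 80 MiB -> "Region l2p ... blocks: 80.00 MiB"
echo $((20971520 * 4 / 1024 / 1024))      # 80
# 20971520 user blocks x 4096 B = 81920 MiB (80 GiB) of FTL capacity
echo $((20971520 * 4096 / 1024 / 1024))   # 81920
# 26476544 base blocks x 4096 B = 103424 MiB, matching the thin lvol size
echo $((26476544 * 4096 / 1024 / 1024))   # 103424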
00:15:15.473 [2024-11-21 00:04:05.831587] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:18.014 [2024-11-21 00:04:08.282969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.014 [2024-11-21 00:04:08.283225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:18.014 [2024-11-21 00:04:08.283309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2451.658 ms 00:15:18.014 [2024-11-21 00:04:08.283337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.014 [2024-11-21 00:04:08.303966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.014 [2024-11-21 00:04:08.304154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:18.014 [2024-11-21 00:04:08.304179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.517 ms 00:15:18.014 [2024-11-21 00:04:08.304203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.014 [2024-11-21 00:04:08.304372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.014 [2024-11-21 00:04:08.304385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:18.014 [2024-11-21 00:04:08.304397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:15:18.014 [2024-11-21 00:04:08.304405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.014 [2024-11-21 00:04:08.317162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.014 [2024-11-21 00:04:08.317236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:18.014 [2024-11-21 00:04:08.317256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.679 ms 00:15:18.014 [2024-11-21 00:04:08.317272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.014 [2024-11-21 00:04:08.317330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.014 [2024-11-21 00:04:08.317339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:18.014 [2024-11-21 00:04:08.317352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:18.014 [2024-11-21 00:04:08.317359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.014 [2024-11-21 00:04:08.317800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.014 [2024-11-21 00:04:08.317824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:18.014 [2024-11-21 00:04:08.317837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:15:18.014 [2024-11-21 00:04:08.317847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.014 [2024-11-21 00:04:08.317995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.014 [2024-11-21 00:04:08.318049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:18.014 [2024-11-21 00:04:08.318063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:15:18.014 [2024-11-21 00:04:08.318084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.014 [2024-11-21 00:04:08.325165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.014 [2024-11-21 00:04:08.325334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:18.014 [2024-11-21 
00:04:08.325354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.050 ms 00:15:18.014 [2024-11-21 00:04:08.325363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.014 [2024-11-21 00:04:08.334436] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:18.015 [2024-11-21 00:04:08.351774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.015 [2024-11-21 00:04:08.351807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:18.015 [2024-11-21 00:04:08.351818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.319 ms 00:15:18.015 [2024-11-21 00:04:08.351829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.015 [2024-11-21 00:04:08.398614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.015 [2024-11-21 00:04:08.398657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:18.015 [2024-11-21 00:04:08.398668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.744 ms 00:15:18.015 [2024-11-21 00:04:08.398681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.015 [2024-11-21 00:04:08.398873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.015 [2024-11-21 00:04:08.398886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:18.015 [2024-11-21 00:04:08.398897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:15:18.015 [2024-11-21 00:04:08.398907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.015 [2024-11-21 00:04:08.403019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.015 [2024-11-21 00:04:08.403054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:18.015 [2024-11-21 00:04:08.403067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.087 ms 00:15:18.015 [2024-11-21 00:04:08.403077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.015 [2024-11-21 00:04:08.405774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.015 [2024-11-21 00:04:08.405809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:18.015 [2024-11-21 00:04:08.405819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.658 ms 00:15:18.015 [2024-11-21 00:04:08.405828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.015 [2024-11-21 00:04:08.406139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.015 [2024-11-21 00:04:08.406158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:18.015 [2024-11-21 00:04:08.406167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:15:18.015 [2024-11-21 00:04:08.406178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.276 [2024-11-21 00:04:08.442081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.276 [2024-11-21 00:04:08.442128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:18.276 [2024-11-21 00:04:08.442141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.877 ms 00:15:18.276 [2024-11-21 00:04:08.442151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.276 [2024-11-21 00:04:08.446499] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.276 [2024-11-21 00:04:08.446535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:18.276 [2024-11-21 00:04:08.446546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.278 ms 00:15:18.276 [2024-11-21 00:04:08.446558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.276 [2024-11-21 00:04:08.449790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.276 [2024-11-21 00:04:08.449823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:18.276 [2024-11-21 00:04:08.449834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:15:18.276 [2024-11-21 00:04:08.449844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.276 [2024-11-21 00:04:08.452941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.276 [2024-11-21 00:04:08.453085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:18.276 [2024-11-21 00:04:08.453099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.052 ms 00:15:18.276 [2024-11-21 00:04:08.453110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.276 [2024-11-21 00:04:08.453155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.276 [2024-11-21 00:04:08.453178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:18.276 [2024-11-21 00:04:08.453187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:18.276 [2024-11-21 00:04:08.453205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.276 [2024-11-21 00:04:08.453323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.276 [2024-11-21 00:04:08.453347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:18.276 [2024-11-21 00:04:08.453357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:15:18.276 [2024-11-21 00:04:08.453369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.276 [2024-11-21 00:04:08.454419] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2634.690 ms, result 0 00:15:18.276 { 00:15:18.276 "name": "ftl0", 00:15:18.276 "uuid": "eb7e3a1f-c117-4f1c-bea7-0df261bedc17" 00:15:18.276 } 00:15:18.276 00:04:08 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:18.276 00:04:08 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:18.276 00:04:08 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:18.276 00:04:08 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:18.276 00:04:08 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:18.276 00:04:08 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:18.276 00:04:08 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:18.276 00:04:08 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:18.539 [ 00:15:18.539 { 00:15:18.539 "name": "ftl0", 00:15:18.539 "aliases": [ 00:15:18.539 "eb7e3a1f-c117-4f1c-bea7-0df261bedc17" 00:15:18.539 ], 00:15:18.539 "product_name": "FTL disk", 00:15:18.539 
"block_size": 4096, 00:15:18.539 "num_blocks": 20971520, 00:15:18.539 "uuid": "eb7e3a1f-c117-4f1c-bea7-0df261bedc17", 00:15:18.539 "assigned_rate_limits": { 00:15:18.539 "rw_ios_per_sec": 0, 00:15:18.539 "rw_mbytes_per_sec": 0, 00:15:18.539 "r_mbytes_per_sec": 0, 00:15:18.539 "w_mbytes_per_sec": 0 00:15:18.539 }, 00:15:18.539 "claimed": false, 00:15:18.539 "zoned": false, 00:15:18.539 "supported_io_types": { 00:15:18.539 "read": true, 00:15:18.539 "write": true, 00:15:18.539 "unmap": true, 00:15:18.539 "flush": true, 00:15:18.539 "reset": false, 00:15:18.539 "nvme_admin": false, 00:15:18.539 "nvme_io": false, 00:15:18.539 "nvme_io_md": false, 00:15:18.539 "write_zeroes": true, 00:15:18.539 "zcopy": false, 00:15:18.539 "get_zone_info": false, 00:15:18.539 "zone_management": false, 00:15:18.539 "zone_append": false, 00:15:18.539 "compare": false, 00:15:18.539 "compare_and_write": false, 00:15:18.539 "abort": false, 00:15:18.539 "seek_hole": false, 00:15:18.539 "seek_data": false, 00:15:18.539 "copy": false, 00:15:18.539 "nvme_iov_md": false 00:15:18.539 }, 00:15:18.539 "driver_specific": { 00:15:18.539 "ftl": { 00:15:18.539 "base_bdev": "250a71d1-afaa-47ce-a753-87d2552e81b1", 00:15:18.539 "cache": "nvc0n1p0" 00:15:18.539 } 00:15:18.539 } 00:15:18.539 } 00:15:18.539 ] 00:15:18.539 00:04:08 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:18.539 00:04:08 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:18.539 00:04:08 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:18.801 00:04:09 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:18.801 00:04:09 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:19.061 [2024-11-21 00:04:09.263366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.263516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:19.061 [2024-11-21 00:04:09.263538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:19.061 [2024-11-21 00:04:09.263546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.061 [2024-11-21 00:04:09.263585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:19.061 [2024-11-21 00:04:09.264151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.264181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:19.061 [2024-11-21 00:04:09.264205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:15:19.061 [2024-11-21 00:04:09.264218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.061 [2024-11-21 00:04:09.264721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.264747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:19.061 [2024-11-21 00:04:09.264756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:15:19.061 [2024-11-21 00:04:09.264766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.061 [2024-11-21 00:04:09.268032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.268056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:19.061 [2024-11-21 
00:04:09.268066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms 00:15:19.061 [2024-11-21 00:04:09.268076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.061 [2024-11-21 00:04:09.274260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.274314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:19.061 [2024-11-21 00:04:09.274324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.159 ms 00:15:19.061 [2024-11-21 00:04:09.274348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.061 [2024-11-21 00:04:09.276037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.276077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:19.061 [2024-11-21 00:04:09.276086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:15:19.061 [2024-11-21 00:04:09.276099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.061 [2024-11-21 00:04:09.280521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.280560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:19.061 [2024-11-21 00:04:09.280570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.381 ms 00:15:19.061 [2024-11-21 00:04:09.280582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.061 [2024-11-21 00:04:09.280748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.280766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:19.061 [2024-11-21 00:04:09.280776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:15:19.061 [2024-11-21 00:04:09.280785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.061 [2024-11-21 00:04:09.282251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.282394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:19.061 [2024-11-21 00:04:09.282408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:15:19.061 [2024-11-21 00:04:09.282418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.061 [2024-11-21 00:04:09.284113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.284152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:19.061 [2024-11-21 00:04:09.284160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:15:19.061 [2024-11-21 00:04:09.284170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.061 [2024-11-21 00:04:09.285211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.285250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:19.061 [2024-11-21 00:04:09.285259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:15:19.061 [2024-11-21 00:04:09.285268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.061 [2024-11-21 00:04:09.286636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.061 [2024-11-21 00:04:09.286673] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:19.061 [2024-11-21 00:04:09.286682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:15:19.061 [2024-11-21 00:04:09.286691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.062 [2024-11-21 00:04:09.286729] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:19.062 [2024-11-21 00:04:09.286749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 
00:04:09.286942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.286993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:19.062 [2024-11-21 00:04:09.287179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-11-21 00:04:09.287448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-11-21 00:04:09.287654] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:19.063 [2024-11-21 00:04:09.287662] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eb7e3a1f-c117-4f1c-bea7-0df261bedc17 00:15:19.063 [2024-11-21 00:04:09.287672] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:19.063 [2024-11-21 00:04:09.287683] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:19.063 [2024-11-21 00:04:09.287692] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:19.063 [2024-11-21 00:04:09.287699] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:19.063 [2024-11-21 00:04:09.287708] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:19.063 [2024-11-21 00:04:09.287716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:19.063 [2024-11-21 00:04:09.287726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:19.063 [2024-11-21 00:04:09.287732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:19.063 [2024-11-21 00:04:09.287740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:19.063 [2024-11-21 00:04:09.287747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.063 [2024-11-21 00:04:09.287756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:19.063 [2024-11-21 00:04:09.287764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:15:19.063 [2024-11-21 00:04:09.287773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.289652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.063 [2024-11-21 00:04:09.289681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:19.063 [2024-11-21 00:04:09.289690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.836 ms 00:15:19.063 [2024-11-21 00:04:09.289699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.289787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.063 [2024-11-21 00:04:09.289798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:19.063 [2024-11-21 00:04:09.289806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:15:19.063 [2024-11-21 00:04:09.289815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.296371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.063 [2024-11-21 00:04:09.296513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:19.063 [2024-11-21 00:04:09.296528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.063 [2024-11-21 00:04:09.296538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 
[2024-11-21 00:04:09.296616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.063 [2024-11-21 00:04:09.296632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:19.063 [2024-11-21 00:04:09.296640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.063 [2024-11-21 00:04:09.296649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.296748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.063 [2024-11-21 00:04:09.296768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:19.063 [2024-11-21 00:04:09.296776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.063 [2024-11-21 00:04:09.296785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.296811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.063 [2024-11-21 00:04:09.296821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:19.063 [2024-11-21 00:04:09.296828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.063 [2024-11-21 00:04:09.296837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.309011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.063 [2024-11-21 00:04:09.309050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:19.063 [2024-11-21 00:04:09.309061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.063 [2024-11-21 00:04:09.309082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.318939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.063 [2024-11-21 00:04:09.318973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:19.063 [2024-11-21 00:04:09.318982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.063 [2024-11-21 00:04:09.318990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.319084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.063 [2024-11-21 00:04:09.319098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:19.063 [2024-11-21 00:04:09.319105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.063 [2024-11-21 00:04:09.319113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.319182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.063 [2024-11-21 00:04:09.319191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:19.063 [2024-11-21 00:04:09.319198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.063 [2024-11-21 00:04:09.319218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.319308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.063 [2024-11-21 00:04:09.319320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:19.063 [2024-11-21 00:04:09.319328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.063 [2024-11-21 00:04:09.319336] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.319384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.063 [2024-11-21 00:04:09.319395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:19.063 [2024-11-21 00:04:09.319402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.063 [2024-11-21 00:04:09.319409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.063 [2024-11-21 00:04:09.319468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-11-21 00:04:09.319480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:19.064 [2024-11-21 00:04:09.319489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-11-21 00:04:09.319505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-11-21 00:04:09.319555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-11-21 00:04:09.319566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:19.064 [2024-11-21 00:04:09.319574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-11-21 00:04:09.319582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-11-21 00:04:09.319747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.341 ms, result 0 00:15:19.064 true 00:15:19.064 00:04:09 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83875 00:15:19.064 00:04:09 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 83875 ']' 00:15:19.064 00:04:09 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 83875 00:15:19.064 00:04:09 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:19.064 00:04:09 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.064 00:04:09 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83875 00:15:19.064 killing process with pid 83875 00:15:19.064 00:04:09 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.064 00:04:09 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.064 00:04:09 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83875' 00:15:19.064 00:04:09 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 83875 00:15:19.064 00:04:09 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 83875 00:15:24.350 00:04:14 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:24.350 00:04:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:24.350 00:04:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:24.350 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.350 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:24.350 00:04:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:24.350 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:24.350 00:04:14 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:24.350 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:24.350 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:24.350 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:24.351 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:24.351 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:24.351 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:24.351 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:24.351 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:24.351 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:24.351 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:24.351 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:24.351 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:24.351 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:24.351 00:04:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:24.351 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:24.351 fio-3.35 00:15:24.351 Starting 1 thread 00:15:28.552 00:15:28.552 test: (groupid=0, jobs=1): err= 0: pid=84034: Thu Nov 21 00:04:18 2024 00:15:28.552 read: IOPS=1031, BW=68.5MiB/s (71.8MB/s)(255MiB/3717msec) 00:15:28.552 slat (nsec): min=4236, max=34792, avg=6006.08, stdev=2512.53 00:15:28.552 clat (usec): min=259, max=1695, avg=435.49, stdev=161.88 00:15:28.552 lat (usec): min=264, max=1705, avg=441.50, stdev=163.29 00:15:28.552 clat percentiles (usec): 00:15:28.552 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 334], 00:15:28.552 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 355], 60.00th=[ 404], 00:15:28.552 | 70.00th=[ 465], 80.00th=[ 510], 90.00th=[ 603], 95.00th=[ 865], 00:15:28.552 | 99.00th=[ 1037], 99.50th=[ 1139], 99.90th=[ 1303], 99.95th=[ 1369], 00:15:28.552 | 99.99th=[ 1696] 00:15:28.552 write: IOPS=1038, BW=68.9MiB/s (72.3MB/s)(256MiB/3714msec); 0 zone resets 00:15:28.552 slat (usec): min=15, max=102, avg=19.73, stdev= 4.36 00:15:28.552 clat (usec): min=304, max=1941, avg=491.74, stdev=189.59 00:15:28.552 lat (usec): min=328, max=1971, avg=511.47, stdev=191.74 00:15:28.552 clat percentiles (usec): 00:15:28.552 | 1.00th=[ 347], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 359], 00:15:28.552 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 396], 60.00th=[ 494], 00:15:28.552 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 701], 95.00th=[ 963], 00:15:28.552 | 99.00th=[ 1188], 99.50th=[ 1287], 99.90th=[ 1614], 99.95th=[ 1811], 00:15:28.552 | 99.99th=[ 1942] 00:15:28.552 bw ( KiB/s): min=45832, max=87584, per=99.18%, avg=70020.57, stdev=16837.19, samples=7 00:15:28.552 iops : min= 674, max= 1288, avg=1029.71, stdev=247.61, samples=7 00:15:28.552 lat (usec) : 500=71.23%, 750=21.11%, 1000=5.50% 
00:15:28.552 lat (msec) : 2=2.16% 00:15:28.552 cpu : usr=99.06%, sys=0.19%, ctx=8, majf=0, minf=1326 00:15:28.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:28.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.552 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:28.552 00:15:28.552 Run status group 0 (all jobs): 00:15:28.552 READ: bw=68.5MiB/s (71.8MB/s), 68.5MiB/s-68.5MiB/s (71.8MB/s-71.8MB/s), io=255MiB (267MB), run=3717-3717msec 00:15:28.552 WRITE: bw=68.9MiB/s (72.3MB/s), 68.9MiB/s-68.9MiB/s (72.3MB/s-72.3MB/s), io=256MiB (269MB), run=3714-3714msec 00:15:29.495 ----------------------------------------------------- 00:15:29.495 Suppressions used: 00:15:29.495 count bytes template 00:15:29.495 1 5 /usr/src/fio/parse.c 00:15:29.495 1 8 libtcmalloc_minimal.so 00:15:29.495 1 904 libcrypto.so 00:15:29.495 ----------------------------------------------------- 00:15:29.495 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:29.495 00:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:29.495 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:29.495 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:29.495 fio-3.35 00:15:29.495 Starting 2 threads 00:15:56.064 00:15:56.064 first_half: (groupid=0, jobs=1): err= 0: pid=84126: Thu Nov 21 00:04:44 2024 00:15:56.064 read: IOPS=2718, BW=10.6MiB/s (11.1MB/s)(255MiB/24003msec) 00:15:56.064 slat (nsec): min=3058, max=34663, avg=5609.40, stdev=1479.11 00:15:56.064 clat (usec): min=694, max=485612, avg=34068.66, stdev=21444.12 00:15:56.064 lat (usec): min=700, max=485618, avg=34074.27, stdev=21444.28 00:15:56.064 clat percentiles (msec): 00:15:56.064 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 31], 00:15:56.064 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:15:56.064 | 70.00th=[ 32], 80.00th=[ 36], 90.00th=[ 38], 95.00th=[ 45], 00:15:56.064 | 99.00th=[ 138], 99.50th=[ 159], 99.90th=[ 305], 99.95th=[ 435], 00:15:56.064 | 99.99th=[ 477] 00:15:56.064 write: IOPS=3173, BW=12.4MiB/s (13.0MB/s)(256MiB/20651msec); 0 zone resets 00:15:56.064 slat (usec): min=3, max=1007, avg= 7.04, stdev= 9.82 00:15:56.064 clat (usec): min=365, max=119510, avg=12941.33, stdev=22402.80 00:15:56.064 lat (usec): min=372, max=119516, avg=12948.37, stdev=22402.93 00:15:56.064 clat percentiles (usec): 00:15:56.064 | 1.00th=[ 676], 5.00th=[ 824], 10.00th=[ 1123], 20.00th=[ 1631], 00:15:56.064 | 30.00th=[ 2835], 40.00th=[ 3785], 50.00th=[ 4817], 60.00th=[ 5538], 00:15:56.064 | 70.00th=[ 6718], 80.00th=[ 14484], 90.00th=[ 30278], 95.00th=[ 79168], 00:15:56.064 | 99.00th=[ 95945], 99.50th=[106431], 99.90th=[112722], 99.95th=[113771], 00:15:56.064 | 99.99th=[116917] 00:15:56.064 bw ( KiB/s): min= 240, max=41392, per=82.60%, avg=20971.52, stdev=11592.95, samples=25 00:15:56.064 iops : min= 60, max=10348, avg=5242.88, stdev=2898.24, samples=25 00:15:56.064 lat (usec) : 500=0.02%, 750=1.44%, 1000=2.63% 00:15:56.064 lat (msec) : 2=7.79%, 4=9.50%, 10=17.13%, 20=7.17%, 50=47.81% 00:15:56.064 lat (msec) : 100=5.18%, 250=1.27%, 500=0.07% 00:15:56.064 cpu : usr=99.28%, sys=0.15%, ctx=54, majf=0, minf=5567 00:15:56.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:56.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.064 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:56.064 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.064 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:56.064 second_half: (groupid=0, jobs=1): err= 0: pid=84127: Thu Nov 21 00:04:44 2024 00:15:56.064 read: IOPS=2731, BW=10.7MiB/s (11.2MB/s)(254MiB/23845msec) 00:15:56.064 slat (usec): min=2, max=117, avg= 4.23, stdev= 1.76 00:15:56.064 clat (usec): min=636, max=439885, avg=34517.40, stdev=17872.38 00:15:56.064 lat (usec): min=641, max=439892, avg=34521.64, stdev=17872.58 00:15:56.064 clat percentiles (msec): 00:15:56.064 | 1.00th=[ 5], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 31], 00:15:56.064 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:15:56.064 | 70.00th=[ 32], 80.00th=[ 36], 90.00th=[ 
39], 95.00th=[ 46], 00:15:56.064 | 99.00th=[ 129], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 300], 00:15:56.064 | 99.99th=[ 409] 00:15:56.064 write: IOPS=4106, BW=16.0MiB/s (16.8MB/s)(256MiB/15960msec); 0 zone resets 00:15:56.064 slat (usec): min=3, max=679, avg= 6.73, stdev= 5.32 00:15:56.064 clat (usec): min=383, max=118468, avg=12244.14, stdev=22114.00 00:15:56.064 lat (usec): min=390, max=118473, avg=12250.88, stdev=22114.27 00:15:56.064 clat percentiles (usec): 00:15:56.064 | 1.00th=[ 717], 5.00th=[ 906], 10.00th=[ 1172], 20.00th=[ 1434], 00:15:56.064 | 30.00th=[ 1876], 40.00th=[ 3195], 50.00th=[ 4424], 60.00th=[ 5342], 00:15:56.064 | 70.00th=[ 6521], 80.00th=[ 13960], 90.00th=[ 21365], 95.00th=[ 78119], 00:15:56.064 | 99.00th=[ 95945], 99.50th=[106431], 99.90th=[112722], 99.95th=[114820], 00:15:56.064 | 99.99th=[117965] 00:15:56.064 bw ( KiB/s): min=10944, max=41632, per=100.00%, avg=27594.11, stdev=9160.99, samples=19 00:15:56.064 iops : min= 2736, max=10408, avg=6898.53, stdev=2290.25, samples=19 00:15:56.064 lat (usec) : 500=0.02%, 750=0.82%, 1000=2.42% 00:15:56.064 lat (msec) : 2=12.62%, 4=7.96%, 10=13.50%, 20=8.44%, 50=47.59% 00:15:56.064 lat (msec) : 100=5.19%, 250=1.39%, 500=0.03% 00:15:56.064 cpu : usr=99.19%, sys=0.17%, ctx=178, majf=0, minf=5565 00:15:56.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:56.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.064 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:56.064 issued rwts: total=65142,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.064 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:56.064 00:15:56.064 Run status group 0 (all jobs): 00:15:56.064 READ: bw=21.2MiB/s (22.2MB/s), 10.6MiB/s-10.7MiB/s (11.1MB/s-11.2MB/s), io=509MiB (534MB), run=23845-24003msec 00:15:56.064 WRITE: bw=24.8MiB/s (26.0MB/s), 12.4MiB/s-16.0MiB/s (13.0MB/s-16.8MB/s), io=512MiB (537MB), run=15960-20651msec 00:15:56.064 ----------------------------------------------------- 00:15:56.064 Suppressions used: 00:15:56.064 count bytes template 00:15:56.064 2 10 /usr/src/fio/parse.c 00:15:56.064 3 288 /usr/src/fio/iolog.c 00:15:56.064 1 8 libtcmalloc_minimal.so 00:15:56.064 1 904 libcrypto.so 00:15:56.064 ----------------------------------------------------- 00:15:56.064 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:56.064 00:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:56.064 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:56.064 fio-3.35 00:15:56.064 Starting 1 thread 00:16:10.943 00:16:10.943 test: (groupid=0, jobs=1): err= 0: pid=84438: Thu Nov 21 00:05:01 2024 00:16:10.943 read: IOPS=7256, BW=28.3MiB/s (29.7MB/s)(255MiB/8985msec) 00:16:10.943 slat (usec): min=3, max=331, avg= 6.20, stdev= 3.07 00:16:10.943 clat (usec): min=658, max=42421, avg=17627.48, stdev=3320.44 00:16:10.943 lat (usec): min=663, max=42425, avg=17633.68, stdev=3321.00 00:16:10.943 clat percentiles (usec): 00:16:10.943 | 1.00th=[13698], 5.00th=[14091], 10.00th=[14222], 20.00th=[14746], 00:16:10.943 | 30.00th=[15664], 40.00th=[16188], 50.00th=[16909], 60.00th=[17695], 00:16:10.943 | 70.00th=[18482], 80.00th=[20055], 90.00th=[22152], 95.00th=[23725], 00:16:10.943 | 99.00th=[28705], 99.50th=[30802], 99.90th=[38536], 99.95th=[39584], 00:16:10.943 | 99.99th=[42206] 00:16:10.943 write: IOPS=12.1k, BW=47.2MiB/s (49.5MB/s)(256MiB/5426msec); 0 zone resets 00:16:10.943 slat (usec): min=4, max=1942, avg= 8.00, stdev=10.75 00:16:10.943 clat (usec): min=453, max=59003, avg=10542.50, stdev=10762.19 00:16:10.943 lat (usec): min=458, max=59011, avg=10550.49, stdev=10762.48 00:16:10.943 clat percentiles (usec): 00:16:10.943 | 1.00th=[ 611], 5.00th=[ 734], 10.00th=[ 807], 20.00th=[ 930], 00:16:10.943 | 30.00th=[ 1090], 40.00th=[ 2376], 50.00th=[ 9110], 60.00th=[11731], 00:16:10.943 | 70.00th=[14222], 80.00th=[17171], 90.00th=[25822], 95.00th=[28181], 00:16:10.943 | 99.00th=[49021], 99.50th=[52167], 99.90th=[56361], 99.95th=[57410], 00:16:10.943 | 99.99th=[58983] 00:16:10.943 bw ( KiB/s): min=30922, max=66456, per=98.64%, avg=47656.91, stdev=12330.38, samples=11 00:16:10.943 iops : min= 7730, max=16614, avg=11914.18, stdev=3082.66, samples=11 00:16:10.943 lat (usec) : 500=0.01%, 750=2.93%, 1000=9.75% 00:16:10.943 lat (msec) : 2=6.78%, 4=1.45%, 10=5.63%, 20=55.00%, 50=18.07% 00:16:10.943 lat (msec) : 100=0.38% 00:16:10.943 cpu : 
usr=98.01%, sys=0.47%, ctx=48, majf=0, minf=5577 00:16:10.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:10.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.944 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:10.944 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:10.944 00:16:10.944 Run status group 0 (all jobs): 00:16:10.944 READ: bw=28.3MiB/s (29.7MB/s), 28.3MiB/s-28.3MiB/s (29.7MB/s-29.7MB/s), io=255MiB (267MB), run=8985-8985msec 00:16:10.944 WRITE: bw=47.2MiB/s (49.5MB/s), 47.2MiB/s-47.2MiB/s (49.5MB/s-49.5MB/s), io=256MiB (268MB), run=5426-5426msec 00:16:11.886 ----------------------------------------------------- 00:16:11.886 Suppressions used: 00:16:11.886 count bytes template 00:16:11.886 1 5 /usr/src/fio/parse.c 00:16:11.886 2 192 /usr/src/fio/iolog.c 00:16:11.886 1 8 libtcmalloc_minimal.so 00:16:11.886 1 904 libcrypto.so 00:16:11.886 ----------------------------------------------------- 00:16:11.886 00:16:11.886 00:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:11.886 00:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:11.886 00:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:12.145 00:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:12.145 Remove shared memory files 00:16:12.145 00:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:12.145 00:05:02 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:12.145 00:05:02 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:12.145 00:05:02 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:12.145 00:05:02 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69450 /dev/shm/spdk_tgt_trace.pid82818 00:16:12.145 00:05:02 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:12.145 00:05:02 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:12.145 ************************************ 00:16:12.145 END TEST ftl_fio_basic 00:16:12.145 ************************************ 00:16:12.145 00:16:12.145 real 1m0.149s 00:16:12.145 user 2m14.962s 00:16:12.145 sys 0m2.776s 00:16:12.145 00:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:12.145 00:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:12.145 00:05:02 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:12.145 00:05:02 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:12.145 00:05:02 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:12.145 00:05:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:12.145 ************************************ 00:16:12.145 START TEST ftl_bdevperf 00:16:12.145 ************************************ 00:16:12.145 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:12.145 * Looking for test storage... 
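For reference, every fio pass in this log is launched through the same fio_bdev/fio_plugin helpers seen in the xtrace: the ASAN runtime that the SPDK fio plugin links against is resolved first, then both are stacked into LD_PRELOAD before the stock fio binary runs the job file. A minimal sketch of that pattern, assembled only from commands and paths that appear in the traces above:

    # Condensed from the fio_plugin xtrace above; paths are this runner's.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

Only the job file changes between passes (randw-verify, randw-verify-j2, randw-verify-depth128); each drives the FTL device through ioengine=spdk_bdev, as the fio banners above show.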
00:16:12.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:12.145 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:12.145 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:12.145 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:12.406 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:12.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.407 --rc genhtml_branch_coverage=1 00:16:12.407 --rc genhtml_function_coverage=1 00:16:12.407 --rc genhtml_legend=1 00:16:12.407 --rc geninfo_all_blocks=1 00:16:12.407 --rc geninfo_unexecuted_blocks=1 00:16:12.407 00:16:12.407 ' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:12.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.407 --rc genhtml_branch_coverage=1 00:16:12.407 
--rc genhtml_function_coverage=1 00:16:12.407 --rc genhtml_legend=1 00:16:12.407 --rc geninfo_all_blocks=1 00:16:12.407 --rc geninfo_unexecuted_blocks=1 00:16:12.407 00:16:12.407 ' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:12.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.407 --rc genhtml_branch_coverage=1 00:16:12.407 --rc genhtml_function_coverage=1 00:16:12.407 --rc genhtml_legend=1 00:16:12.407 --rc geninfo_all_blocks=1 00:16:12.407 --rc geninfo_unexecuted_blocks=1 00:16:12.407 00:16:12.407 ' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:12.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.407 --rc genhtml_branch_coverage=1 00:16:12.407 --rc genhtml_function_coverage=1 00:16:12.407 --rc genhtml_legend=1 00:16:12.407 --rc geninfo_all_blocks=1 00:16:12.407 --rc geninfo_unexecuted_blocks=1 00:16:12.407 00:16:12.407 ' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84677 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84677 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 84677 ']' 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:12.407 00:05:02 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:12.407 [2024-11-21 00:05:02.670960] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
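bdevperf is started here with -z (come up idle and wait to be configured over JSON-RPC) and -T ftl0 (test only that bdev), so the script must build the device stack one bdev at a time before the run. A condensed sketch of that RPC sequence, taken from the rpc.py calls traced below (the lvstore UUID is this run's value and differs per run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base device -> nvme0n1
    for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"                          # clear leftover lvstores
    done
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 67ce800c-6c4b-4327-b246-50c70245d9c0
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV-cache device

The 103424 passed to bdev_lvol_create appears to be MiB (it matches the 26476544 x 4096-byte thin lvol dumped below); that lvol and the nvc0 namespace are what the script subsequently wires up as the FTL base and cache bdevs.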
00:16:12.407 [2024-11-21 00:05:02.671360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84677 ] 00:16:12.407 [2024-11-21 00:05:02.816383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.668 [2024-11-21 00:05:02.890761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.239 00:05:03 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:13.239 00:05:03 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:13.239 00:05:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:13.239 00:05:03 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:13.239 00:05:03 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:13.239 00:05:03 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:13.239 00:05:03 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:13.239 00:05:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:13.500 00:05:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:13.500 00:05:03 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:13.500 00:05:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:13.500 00:05:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:13.500 00:05:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:13.500 00:05:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:13.500 00:05:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:13.500 00:05:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:13.760 00:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:13.760 { 00:16:13.760 "name": "nvme0n1", 00:16:13.760 "aliases": [ 00:16:13.760 "cb896bef-b8ee-4b6d-b56f-e12b2595c187" 00:16:13.760 ], 00:16:13.760 "product_name": "NVMe disk", 00:16:13.760 "block_size": 4096, 00:16:13.760 "num_blocks": 1310720, 00:16:13.760 "uuid": "cb896bef-b8ee-4b6d-b56f-e12b2595c187", 00:16:13.760 "numa_id": -1, 00:16:13.760 "assigned_rate_limits": { 00:16:13.760 "rw_ios_per_sec": 0, 00:16:13.760 "rw_mbytes_per_sec": 0, 00:16:13.760 "r_mbytes_per_sec": 0, 00:16:13.760 "w_mbytes_per_sec": 0 00:16:13.760 }, 00:16:13.760 "claimed": true, 00:16:13.760 "claim_type": "read_many_write_one", 00:16:13.760 "zoned": false, 00:16:13.760 "supported_io_types": { 00:16:13.760 "read": true, 00:16:13.760 "write": true, 00:16:13.760 "unmap": true, 00:16:13.760 "flush": true, 00:16:13.760 "reset": true, 00:16:13.760 "nvme_admin": true, 00:16:13.760 "nvme_io": true, 00:16:13.760 "nvme_io_md": false, 00:16:13.760 "write_zeroes": true, 00:16:13.760 "zcopy": false, 00:16:13.760 "get_zone_info": false, 00:16:13.760 "zone_management": false, 00:16:13.760 "zone_append": false, 00:16:13.760 "compare": true, 00:16:13.760 "compare_and_write": false, 00:16:13.760 "abort": true, 00:16:13.760 "seek_hole": false, 00:16:13.760 "seek_data": false, 00:16:13.760 "copy": true, 00:16:13.760 "nvme_iov_md": false 00:16:13.760 }, 00:16:13.760 "driver_specific": { 00:16:13.760 
"nvme": [ 00:16:13.760 { 00:16:13.760 "pci_address": "0000:00:11.0", 00:16:13.760 "trid": { 00:16:13.760 "trtype": "PCIe", 00:16:13.761 "traddr": "0000:00:11.0" 00:16:13.761 }, 00:16:13.761 "ctrlr_data": { 00:16:13.761 "cntlid": 0, 00:16:13.761 "vendor_id": "0x1b36", 00:16:13.761 "model_number": "QEMU NVMe Ctrl", 00:16:13.761 "serial_number": "12341", 00:16:13.761 "firmware_revision": "8.0.0", 00:16:13.761 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:13.761 "oacs": { 00:16:13.761 "security": 0, 00:16:13.761 "format": 1, 00:16:13.761 "firmware": 0, 00:16:13.761 "ns_manage": 1 00:16:13.761 }, 00:16:13.761 "multi_ctrlr": false, 00:16:13.761 "ana_reporting": false 00:16:13.761 }, 00:16:13.761 "vs": { 00:16:13.761 "nvme_version": "1.4" 00:16:13.761 }, 00:16:13.761 "ns_data": { 00:16:13.761 "id": 1, 00:16:13.761 "can_share": false 00:16:13.761 } 00:16:13.761 } 00:16:13.761 ], 00:16:13.761 "mp_policy": "active_passive" 00:16:13.761 } 00:16:13.761 } 00:16:13.761 ]' 00:16:13.761 00:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:13.761 00:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:13.761 00:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:13.761 00:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:13.761 00:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:13.761 00:05:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:13.761 00:05:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:13.761 00:05:04 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:13.761 00:05:04 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:13.761 00:05:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:13.761 00:05:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:14.020 00:05:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=137f9d3d-0bea-4000-9744-25aad81576ac 00:16:14.020 00:05:04 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:14.020 00:05:04 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 137f9d3d-0bea-4000-9744-25aad81576ac 00:16:14.279 00:05:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:14.538 00:05:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=67ce800c-6c4b-4327-b246-50c70245d9c0 00:16:14.538 00:05:04 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 67ce800c-6c4b-4327-b246-50c70245d9c0 00:16:14.796 00:05:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:14.796 00:05:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:14.796 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:14.796 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:14.796 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:14.796 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:14.796 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:14.796 00:05:05 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:14.796 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:14.796 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:14.796 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:14.796 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:15.054 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:15.054 { 00:16:15.054 "name": "e88f650a-fd2b-47af-a883-47df8ae62a28", 00:16:15.054 "aliases": [ 00:16:15.054 "lvs/nvme0n1p0" 00:16:15.054 ], 00:16:15.054 "product_name": "Logical Volume", 00:16:15.054 "block_size": 4096, 00:16:15.054 "num_blocks": 26476544, 00:16:15.054 "uuid": "e88f650a-fd2b-47af-a883-47df8ae62a28", 00:16:15.054 "assigned_rate_limits": { 00:16:15.054 "rw_ios_per_sec": 0, 00:16:15.054 "rw_mbytes_per_sec": 0, 00:16:15.054 "r_mbytes_per_sec": 0, 00:16:15.054 "w_mbytes_per_sec": 0 00:16:15.054 }, 00:16:15.054 "claimed": false, 00:16:15.054 "zoned": false, 00:16:15.054 "supported_io_types": { 00:16:15.054 "read": true, 00:16:15.054 "write": true, 00:16:15.054 "unmap": true, 00:16:15.054 "flush": false, 00:16:15.054 "reset": true, 00:16:15.054 "nvme_admin": false, 00:16:15.054 "nvme_io": false, 00:16:15.054 "nvme_io_md": false, 00:16:15.054 "write_zeroes": true, 00:16:15.054 "zcopy": false, 00:16:15.054 "get_zone_info": false, 00:16:15.054 "zone_management": false, 00:16:15.054 "zone_append": false, 00:16:15.054 "compare": false, 00:16:15.054 "compare_and_write": false, 00:16:15.054 "abort": false, 00:16:15.054 "seek_hole": true, 00:16:15.054 "seek_data": true, 00:16:15.054 "copy": false, 00:16:15.054 "nvme_iov_md": false 00:16:15.054 }, 00:16:15.054 "driver_specific": { 00:16:15.054 "lvol": { 00:16:15.054 "lvol_store_uuid": "67ce800c-6c4b-4327-b246-50c70245d9c0", 00:16:15.054 "base_bdev": "nvme0n1", 00:16:15.054 "thin_provision": true, 00:16:15.054 "num_allocated_clusters": 0, 00:16:15.055 "snapshot": false, 00:16:15.055 "clone": false, 00:16:15.055 "esnap_clone": false 00:16:15.055 } 00:16:15.055 } 00:16:15.055 } 00:16:15.055 ]' 00:16:15.055 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:15.055 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:15.055 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:15.055 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:15.055 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:15.055 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:15.055 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:15.055 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:15.055 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:15.314 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:15.314 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:15.314 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:15.314 00:05:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:15.314 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:15.314 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:15.314 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:15.314 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:15.574 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:15.574 { 00:16:15.574 "name": "e88f650a-fd2b-47af-a883-47df8ae62a28", 00:16:15.574 "aliases": [ 00:16:15.574 "lvs/nvme0n1p0" 00:16:15.574 ], 00:16:15.574 "product_name": "Logical Volume", 00:16:15.574 "block_size": 4096, 00:16:15.574 "num_blocks": 26476544, 00:16:15.574 "uuid": "e88f650a-fd2b-47af-a883-47df8ae62a28", 00:16:15.574 "assigned_rate_limits": { 00:16:15.574 "rw_ios_per_sec": 0, 00:16:15.574 "rw_mbytes_per_sec": 0, 00:16:15.574 "r_mbytes_per_sec": 0, 00:16:15.574 "w_mbytes_per_sec": 0 00:16:15.574 }, 00:16:15.574 "claimed": false, 00:16:15.574 "zoned": false, 00:16:15.574 "supported_io_types": { 00:16:15.574 "read": true, 00:16:15.574 "write": true, 00:16:15.574 "unmap": true, 00:16:15.574 "flush": false, 00:16:15.574 "reset": true, 00:16:15.574 "nvme_admin": false, 00:16:15.574 "nvme_io": false, 00:16:15.574 "nvme_io_md": false, 00:16:15.574 "write_zeroes": true, 00:16:15.574 "zcopy": false, 00:16:15.574 "get_zone_info": false, 00:16:15.574 "zone_management": false, 00:16:15.574 "zone_append": false, 00:16:15.574 "compare": false, 00:16:15.574 "compare_and_write": false, 00:16:15.574 "abort": false, 00:16:15.574 "seek_hole": true, 00:16:15.574 "seek_data": true, 00:16:15.574 "copy": false, 00:16:15.574 "nvme_iov_md": false 00:16:15.574 }, 00:16:15.574 "driver_specific": { 00:16:15.574 "lvol": { 00:16:15.574 "lvol_store_uuid": "67ce800c-6c4b-4327-b246-50c70245d9c0", 00:16:15.574 "base_bdev": "nvme0n1", 00:16:15.574 "thin_provision": true, 00:16:15.574 "num_allocated_clusters": 0, 00:16:15.574 "snapshot": false, 00:16:15.574 "clone": false, 00:16:15.574 "esnap_clone": false 00:16:15.574 } 00:16:15.574 } 00:16:15.574 } 00:16:15.574 ]' 00:16:15.574 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:15.574 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:15.574 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:15.574 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:15.574 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:15.574 00:05:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:15.574 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:15.574 00:05:05 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:15.919 00:05:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:15.919 00:05:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:15.919 00:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:15.919 00:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:15.919 00:05:06 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:15.919 00:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:15.919 00:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e88f650a-fd2b-47af-a883-47df8ae62a28 00:16:15.919 00:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:15.919 { 00:16:15.919 "name": "e88f650a-fd2b-47af-a883-47df8ae62a28", 00:16:15.919 "aliases": [ 00:16:15.919 "lvs/nvme0n1p0" 00:16:15.919 ], 00:16:15.919 "product_name": "Logical Volume", 00:16:15.919 "block_size": 4096, 00:16:15.919 "num_blocks": 26476544, 00:16:15.919 "uuid": "e88f650a-fd2b-47af-a883-47df8ae62a28", 00:16:15.919 "assigned_rate_limits": { 00:16:15.919 "rw_ios_per_sec": 0, 00:16:15.919 "rw_mbytes_per_sec": 0, 00:16:15.919 "r_mbytes_per_sec": 0, 00:16:15.919 "w_mbytes_per_sec": 0 00:16:15.919 }, 00:16:15.919 "claimed": false, 00:16:15.919 "zoned": false, 00:16:15.919 "supported_io_types": { 00:16:15.919 "read": true, 00:16:15.919 "write": true, 00:16:15.919 "unmap": true, 00:16:15.919 "flush": false, 00:16:15.919 "reset": true, 00:16:15.919 "nvme_admin": false, 00:16:15.919 "nvme_io": false, 00:16:15.919 "nvme_io_md": false, 00:16:15.919 "write_zeroes": true, 00:16:15.919 "zcopy": false, 00:16:15.919 "get_zone_info": false, 00:16:15.919 "zone_management": false, 00:16:15.919 "zone_append": false, 00:16:15.919 "compare": false, 00:16:15.919 "compare_and_write": false, 00:16:15.919 "abort": false, 00:16:15.919 "seek_hole": true, 00:16:15.919 "seek_data": true, 00:16:15.919 "copy": false, 00:16:15.919 "nvme_iov_md": false 00:16:15.919 }, 00:16:15.919 "driver_specific": { 00:16:15.919 "lvol": { 00:16:15.919 "lvol_store_uuid": "67ce800c-6c4b-4327-b246-50c70245d9c0", 00:16:15.919 "base_bdev": "nvme0n1", 00:16:15.919 "thin_provision": true, 00:16:15.919 "num_allocated_clusters": 0, 00:16:15.919 "snapshot": false, 00:16:15.919 "clone": false, 00:16:15.919 "esnap_clone": false 00:16:15.919 } 00:16:15.919 } 00:16:15.919 } 00:16:15.919 ]' 00:16:15.919 00:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:16.191 00:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:16.191 00:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:16.191 00:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:16.191 00:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:16.191 00:05:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:16.191 00:05:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:16.191 00:05:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e88f650a-fd2b-47af-a883-47df8ae62a28 -c nvc0n1p0 --l2p_dram_limit 20 00:16:16.191 [2024-11-21 00:05:06.546407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.191 [2024-11-21 00:05:06.546563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:16.191 [2024-11-21 00:05:06.546587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:16.191 [2024-11-21 00:05:06.546599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.191 [2024-11-21 00:05:06.546651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.191 [2024-11-21 00:05:06.546659] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:16.191 [2024-11-21 00:05:06.546669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:16.191 [2024-11-21 00:05:06.546675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.191 [2024-11-21 00:05:06.546693] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:16.191 [2024-11-21 00:05:06.546887] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:16.191 [2024-11-21 00:05:06.546902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.191 [2024-11-21 00:05:06.546908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:16.191 [2024-11-21 00:05:06.546917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:16:16.191 [2024-11-21 00:05:06.546923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.191 [2024-11-21 00:05:06.546975] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bcd3eba6-cf5c-4e79-a489-3957b19bde63 00:16:16.191 [2024-11-21 00:05:06.548253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.191 [2024-11-21 00:05:06.548285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:16.191 [2024-11-21 00:05:06.548293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:16.191 [2024-11-21 00:05:06.548316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.191 [2024-11-21 00:05:06.555281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.191 [2024-11-21 00:05:06.555318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:16.191 [2024-11-21 00:05:06.555327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.934 ms 00:16:16.191 [2024-11-21 00:05:06.555341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.191 [2024-11-21 00:05:06.555398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.191 [2024-11-21 00:05:06.555412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:16.191 [2024-11-21 00:05:06.555419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:16.191 [2024-11-21 00:05:06.555431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.191 [2024-11-21 00:05:06.555471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.191 [2024-11-21 00:05:06.555483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:16.191 [2024-11-21 00:05:06.555489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:16.191 [2024-11-21 00:05:06.555500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.191 [2024-11-21 00:05:06.555515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:16.191 [2024-11-21 00:05:06.557158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.191 [2024-11-21 00:05:06.557187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:16.191 [2024-11-21 00:05:06.557207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:16:16.191 [2024-11-21 00:05:06.557217] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.191 [2024-11-21 00:05:06.557258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.191 [2024-11-21 00:05:06.557272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:16.191 [2024-11-21 00:05:06.557285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:16.191 [2024-11-21 00:05:06.557317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.191 [2024-11-21 00:05:06.557338] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:16.191 [2024-11-21 00:05:06.557466] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:16.191 [2024-11-21 00:05:06.557480] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:16.191 [2024-11-21 00:05:06.557489] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:16.191 [2024-11-21 00:05:06.557499] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:16.191 [2024-11-21 00:05:06.557506] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:16.191 [2024-11-21 00:05:06.557515] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:16.191 [2024-11-21 00:05:06.557522] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:16.191 [2024-11-21 00:05:06.557530] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:16.191 [2024-11-21 00:05:06.557536] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:16.191 [2024-11-21 00:05:06.557546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.191 [2024-11-21 00:05:06.557551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:16.191 [2024-11-21 00:05:06.557561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:16:16.191 [2024-11-21 00:05:06.557567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.191 [2024-11-21 00:05:06.557633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.191 [2024-11-21 00:05:06.557640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:16.191 [2024-11-21 00:05:06.557648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:16.191 [2024-11-21 00:05:06.557653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.191 [2024-11-21 00:05:06.557725] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:16.191 [2024-11-21 00:05:06.557738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:16.191 [2024-11-21 00:05:06.557749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:16.191 [2024-11-21 00:05:06.557757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.191 [2024-11-21 00:05:06.557770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:16.191 [2024-11-21 00:05:06.557775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:16.191 [2024-11-21 00:05:06.557782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:16.191 
[2024-11-21 00:05:06.557787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:16.191 [2024-11-21 00:05:06.557795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:16.191 [2024-11-21 00:05:06.557800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:16.191 [2024-11-21 00:05:06.557808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:16.192 [2024-11-21 00:05:06.557813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:16.192 [2024-11-21 00:05:06.557822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:16.192 [2024-11-21 00:05:06.557830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:16.192 [2024-11-21 00:05:06.557838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:16.192 [2024-11-21 00:05:06.557843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.192 [2024-11-21 00:05:06.557851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:16.192 [2024-11-21 00:05:06.557857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:16.192 [2024-11-21 00:05:06.557867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.192 [2024-11-21 00:05:06.557873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:16.192 [2024-11-21 00:05:06.557880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:16.192 [2024-11-21 00:05:06.557886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:16.192 [2024-11-21 00:05:06.557893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:16.192 [2024-11-21 00:05:06.557901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:16.192 [2024-11-21 00:05:06.557909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:16.192 [2024-11-21 00:05:06.557917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:16.192 [2024-11-21 00:05:06.557926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:16.192 [2024-11-21 00:05:06.557933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:16.192 [2024-11-21 00:05:06.557942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:16.192 [2024-11-21 00:05:06.557948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:16.192 [2024-11-21 00:05:06.557957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:16.192 [2024-11-21 00:05:06.557963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:16.192 [2024-11-21 00:05:06.557971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:16.192 [2024-11-21 00:05:06.557977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:16.192 [2024-11-21 00:05:06.557984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:16.192 [2024-11-21 00:05:06.557990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:16.192 [2024-11-21 00:05:06.557997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:16.192 [2024-11-21 00:05:06.558004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:16.192 [2024-11-21 00:05:06.558011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:16.192 [2024-11-21 00:05:06.558017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.192 [2024-11-21 00:05:06.558024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:16.192 [2024-11-21 00:05:06.558031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:16.192 [2024-11-21 00:05:06.558038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.192 [2024-11-21 00:05:06.558044] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:16.192 [2024-11-21 00:05:06.558054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:16.192 [2024-11-21 00:05:06.558061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:16.192 [2024-11-21 00:05:06.558069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.192 [2024-11-21 00:05:06.558075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:16.192 [2024-11-21 00:05:06.558083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:16.192 [2024-11-21 00:05:06.558089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:16.192 [2024-11-21 00:05:06.558097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:16.192 [2024-11-21 00:05:06.558102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:16.192 [2024-11-21 00:05:06.558109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:16.192 [2024-11-21 00:05:06.558118] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:16.192 [2024-11-21 00:05:06.558129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:16.192 [2024-11-21 00:05:06.558136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:16.192 [2024-11-21 00:05:06.558146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:16.192 [2024-11-21 00:05:06.558155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:16.192 [2024-11-21 00:05:06.558164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:16.192 [2024-11-21 00:05:06.558170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:16.192 [2024-11-21 00:05:06.558180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:16.192 [2024-11-21 00:05:06.558187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:16.192 [2024-11-21 00:05:06.558198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:16.192 [2024-11-21 00:05:06.558203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:16.192 [2024-11-21 00:05:06.558210] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:16.192 [2024-11-21 00:05:06.558215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:16.192 [2024-11-21 00:05:06.558223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:16.192 [2024-11-21 00:05:06.558230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:16.192 [2024-11-21 00:05:06.558236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:16.192 [2024-11-21 00:05:06.558242] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:16.192 [2024-11-21 00:05:06.558249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:16.192 [2024-11-21 00:05:06.558256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:16.192 [2024-11-21 00:05:06.558263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:16.192 [2024-11-21 00:05:06.558268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:16.192 [2024-11-21 00:05:06.558276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:16.192 [2024-11-21 00:05:06.558281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.192 [2024-11-21 00:05:06.558292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:16.192 [2024-11-21 00:05:06.558314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:16:16.192 [2024-11-21 00:05:06.558321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.192 [2024-11-21 00:05:06.558348] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
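Condensed from the rpc.py calls traced above, the stack under test is: base NVMe at 0000:00:11.0 -> lvstore -> thin-provisioned lvol -> FTL bdev, with a 5171 MiB split of the second NVMe at 0000:00:10.0 as NV cache. A sketch with the values as logged:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base: 1310720 x 4096 B = 5120 MiB
    "$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs
    "$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u 67ce800c-6c4b-4327-b246-50c70245d9c0   # thin, so > base size is fine
    "$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device
    "$rpc_py" bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB NV cache slice
    "$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d e88f650a-fd2b-47af-a883-47df8ae62a28 -c nvc0n1p0 --l2p_dram_limit 20

The layout dump above is internally consistent with these parameters: the full L2P region is entry count times address size, while --l2p_dram_limit 20 only caps how much of it stays resident in DRAM (the cache later reports 19 of 20 MiB usable):

    echo $(( 20971520 * 4 ))    # = 83886080 B = 80.00 MiB, matching "Region l2p ... blocks: 80.00 MiB"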
00:16:16.192 [2024-11-21 00:05:06.558357] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:20.390 [2024-11-21 00:05:10.246778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.390 [2024-11-21 00:05:10.246882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:20.390 [2024-11-21 00:05:10.246901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3688.413 ms 00:16:20.390 [2024-11-21 00:05:10.246915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.390 [2024-11-21 00:05:10.276951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.390 [2024-11-21 00:05:10.277046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:20.390 [2024-11-21 00:05:10.277070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.894 ms 00:16:20.390 [2024-11-21 00:05:10.277090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.390 [2024-11-21 00:05:10.277287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.390 [2024-11-21 00:05:10.277341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:20.390 [2024-11-21 00:05:10.277358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:16:20.390 [2024-11-21 00:05:10.277383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.390 [2024-11-21 00:05:10.293682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.390 [2024-11-21 00:05:10.293746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:20.390 [2024-11-21 00:05:10.293759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.210 ms 00:16:20.390 [2024-11-21 00:05:10.293772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.390 [2024-11-21 00:05:10.293804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.390 [2024-11-21 00:05:10.293816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:20.390 [2024-11-21 00:05:10.293826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:20.390 [2024-11-21 00:05:10.293836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.390 [2024-11-21 00:05:10.294597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.390 [2024-11-21 00:05:10.294638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:20.391 [2024-11-21 00:05:10.294653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:16:20.391 [2024-11-21 00:05:10.294668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.294809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.294831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:20.391 [2024-11-21 00:05:10.294840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:16:20.391 [2024-11-21 00:05:10.294851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.305084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.305460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:20.391 [2024-11-21 
00:05:10.305485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.213 ms 00:16:20.391 [2024-11-21 00:05:10.305497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.317079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:20.391 [2024-11-21 00:05:10.326609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.326661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:20.391 [2024-11-21 00:05:10.326677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.970 ms 00:16:20.391 [2024-11-21 00:05:10.326686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.414544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.414764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:20.391 [2024-11-21 00:05:10.414797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.814 ms 00:16:20.391 [2024-11-21 00:05:10.414806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.415156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.415187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:20.391 [2024-11-21 00:05:10.415205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:16:20.391 [2024-11-21 00:05:10.415214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.421654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.421854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:20.391 [2024-11-21 00:05:10.421880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.392 ms 00:16:20.391 [2024-11-21 00:05:10.421890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.427473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.427530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:20.391 [2024-11-21 00:05:10.427545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.422 ms 00:16:20.391 [2024-11-21 00:05:10.427555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.427923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.427947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:20.391 [2024-11-21 00:05:10.427966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:16:20.391 [2024-11-21 00:05:10.427974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.474222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.474278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:20.391 [2024-11-21 00:05:10.474312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.186 ms 00:16:20.391 [2024-11-21 00:05:10.474322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 
00:05:10.482607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.482667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:20.391 [2024-11-21 00:05:10.482686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.204 ms 00:16:20.391 [2024-11-21 00:05:10.482696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.488790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.488839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:20.391 [2024-11-21 00:05:10.488853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.040 ms 00:16:20.391 [2024-11-21 00:05:10.488861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.495521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.495735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:20.391 [2024-11-21 00:05:10.495764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.607 ms 00:16:20.391 [2024-11-21 00:05:10.495773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.495924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.495953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:20.391 [2024-11-21 00:05:10.495977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:20.391 [2024-11-21 00:05:10.495986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.496092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.391 [2024-11-21 00:05:10.496105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:20.391 [2024-11-21 00:05:10.496117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:20.391 [2024-11-21 00:05:10.496125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.391 [2024-11-21 00:05:10.497526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3950.492 ms, result 0 00:16:20.391 { 00:16:20.391 "name": "ftl0", 00:16:20.391 "uuid": "bcd3eba6-cf5c-4e79-a489-3957b19bde63" 00:16:20.391 } 00:16:20.391 00:05:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:20.391 00:05:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:20.391 00:05:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:20.391 00:05:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:20.652 [2024-11-21 00:05:10.820868] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:20.652 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:20.652 Zero copy mechanism will not be used. 00:16:20.652 Running I/O for 4 seconds... 
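Each of the three measurement phases (this randwrite q=1 run and the two that follow) is driven the same way: bdevperf.py connects to the already-running app over the default /var/tmp/spdk.sock and issues a perform_tests RPC with the job parameters. Re-issuing this first phase by hand would look like:

    py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    "$py" perform_tests -q 1 -w randwrite -t 4 -o 69632
    echo $(( 17 * 4096 ))    # = 69632: a 68 KiB I/O, above the 65536 B zero-copy threshold noted above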
00:16:22.523 694.00 IOPS, 46.09 MiB/s [2024-11-21T00:05:13.878Z] 725.50 IOPS, 48.18 MiB/s [2024-11-21T00:05:15.251Z] 740.00 IOPS, 49.14 MiB/s [2024-11-21T00:05:15.251Z] 753.25 IOPS, 50.02 MiB/s 00:16:24.830 Latency(us) 00:16:24.830 [2024-11-21T00:05:15.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.830 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:24.830 ftl0 : 4.00 753.24 50.02 0.00 0.00 1406.39 392.27 2457.60 00:16:24.830 [2024-11-21T00:05:15.251Z] =================================================================================================================== 00:16:24.830 [2024-11-21T00:05:15.251Z] Total : 753.24 50.02 0.00 0.00 1406.39 392.27 2457.60 00:16:24.830 [2024-11-21 00:05:14.829012] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:24.830 { 00:16:24.830 "results": [ 00:16:24.830 { 00:16:24.830 "job": "ftl0", 00:16:24.830 "core_mask": "0x1", 00:16:24.830 "workload": "randwrite", 00:16:24.830 "status": "finished", 00:16:24.830 "queue_depth": 1, 00:16:24.830 "io_size": 69632, 00:16:24.830 "runtime": 4.001397, 00:16:24.830 "iops": 753.2369320014985, 00:16:24.830 "mibps": 50.01964001572451, 00:16:24.830 "io_failed": 0, 00:16:24.830 "io_timeout": 0, 00:16:24.830 "avg_latency_us": 1406.3854137103772, 00:16:24.830 "min_latency_us": 392.27076923076925, 00:16:24.830 "max_latency_us": 2457.6 00:16:24.830 } 00:16:24.830 ], 00:16:24.830 "core_count": 1 00:16:24.830 } 00:16:24.830 00:05:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:24.830 [2024-11-21 00:05:14.935472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:24.830 Running I/O for 4 seconds... 
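The MiB/s column in these tables is derived rather than measured separately: throughput = IOPS x I/O size. Checking the q=1 run just completed against its results JSON:

    awk 'BEGIN { print 753.2369 * 69632 / 1048576 }'    # ~= 50.02 MiB/s, matching "mibps" above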
00:16:26.714 7394.00 IOPS, 28.88 MiB/s [2024-11-21T00:05:18.078Z] 6188.00 IOPS, 24.17 MiB/s [2024-11-21T00:05:19.021Z] 5712.67 IOPS, 22.32 MiB/s [2024-11-21T00:05:19.021Z] 5505.25 IOPS, 21.50 MiB/s 00:16:28.600 Latency(us) 00:16:28.600 [2024-11-21T00:05:19.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.600 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.600 ftl0 : 4.03 5490.69 21.45 0.00 0.00 23219.74 278.84 45976.02 00:16:28.600 [2024-11-21T00:05:19.021Z] =================================================================================================================== 00:16:28.600 [2024-11-21T00:05:19.021Z] Total : 5490.69 21.45 0.00 0.00 23219.74 0.00 45976.02 00:16:28.600 [2024-11-21 00:05:18.975759] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:28.600 { 00:16:28.600 "results": [ 00:16:28.600 { 00:16:28.600 "job": "ftl0", 00:16:28.600 "core_mask": "0x1", 00:16:28.600 "workload": "randwrite", 00:16:28.600 "status": "finished", 00:16:28.600 "queue_depth": 128, 00:16:28.600 "io_size": 4096, 00:16:28.600 "runtime": 4.033922, 00:16:28.600 "iops": 5490.686235380853, 00:16:28.600 "mibps": 21.447993106956456, 00:16:28.600 "io_failed": 0, 00:16:28.600 "io_timeout": 0, 00:16:28.600 "avg_latency_us": 23219.737770970733, 00:16:28.600 "min_latency_us": 278.8430769230769, 00:16:28.600 "max_latency_us": 45976.02461538462 00:16:28.600 } 00:16:28.600 ], 00:16:28.600 "core_count": 1 00:16:28.600 } 00:16:28.600 00:05:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:28.861 [2024-11-21 00:05:19.092548] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:28.861 Running I/O for 4 seconds... 
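The third phase switches to -w verify, where bdevperf writes a pattern and then reads it back for comparison. The range length reported in the results below appears to be in LBAs and equals the L2P entry count from the layout dump, which suggests the job spans the whole ftl0 bdev:

    printf '%d\n' 0x1400000    # = 20971520, same as "L2P entries: 20971520" above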
00:16:30.744 4215.00 IOPS, 16.46 MiB/s [2024-11-21T00:05:22.104Z] 4244.50 IOPS, 16.58 MiB/s [2024-11-21T00:05:23.482Z] 4680.00 IOPS, 18.28 MiB/s [2024-11-21T00:05:23.482Z] 4983.75 IOPS, 19.47 MiB/s 00:16:33.061 Latency(us) 00:16:33.061 [2024-11-21T00:05:23.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.061 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:33.061 Verification LBA range: start 0x0 length 0x1400000 00:16:33.061 ftl0 : 4.01 5001.73 19.54 0.00 0.00 25526.82 343.43 39523.25 00:16:33.061 [2024-11-21T00:05:23.482Z] =================================================================================================================== 00:16:33.061 [2024-11-21T00:05:23.482Z] Total : 5001.73 19.54 0.00 0.00 25526.82 0.00 39523.25 00:16:33.061 { 00:16:33.061 "results": [ 00:16:33.061 { 00:16:33.061 "job": "ftl0", 00:16:33.061 "core_mask": "0x1", 00:16:33.061 "workload": "verify", 00:16:33.061 "status": "finished", 00:16:33.061 "verify_range": { 00:16:33.061 "start": 0, 00:16:33.061 "length": 20971520 00:16:33.061 }, 00:16:33.061 "queue_depth": 128, 00:16:33.061 "io_size": 4096, 00:16:33.061 "runtime": 4.011215, 00:16:33.061 "iops": 5001.7264095791425, 00:16:33.061 "mibps": 19.537993787418525, 00:16:33.061 "io_failed": 0, 00:16:33.061 "io_timeout": 0, 00:16:33.061 "avg_latency_us": 25526.82384396842, 00:16:33.061 "min_latency_us": 343.43384615384616, 00:16:33.061 "max_latency_us": 39523.24923076923 00:16:33.061 } 00:16:33.061 ], 00:16:33.061 "core_count": 1 00:16:33.061 } 00:16:33.061 [2024-11-21 00:05:23.112695] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:33.061 00:05:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:33.061 [2024-11-21 00:05:23.313000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.061 [2024-11-21 00:05:23.313154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:33.061 [2024-11-21 00:05:23.313174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:33.061 [2024-11-21 00:05:23.313181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.061 [2024-11-21 00:05:23.313211] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:33.061 [2024-11-21 00:05:23.313742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.061 [2024-11-21 00:05:23.313766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:33.061 [2024-11-21 00:05:23.313774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:16:33.061 [2024-11-21 00:05:23.313785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.061 [2024-11-21 00:05:23.316313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.061 [2024-11-21 00:05:23.316344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:33.061 [2024-11-21 00:05:23.316353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.513 ms 00:16:33.061 [2024-11-21 00:05:23.316363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.322 [2024-11-21 00:05:23.506836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.322 [2024-11-21 00:05:23.506878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:16:33.322 [2024-11-21 00:05:23.506890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 190.457 ms 00:16:33.322 [2024-11-21 00:05:23.506898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.322 [2024-11-21 00:05:23.511570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.322 [2024-11-21 00:05:23.511596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:33.322 [2024-11-21 00:05:23.511604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.643 ms 00:16:33.322 [2024-11-21 00:05:23.511612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.322 [2024-11-21 00:05:23.513859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.322 [2024-11-21 00:05:23.513890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:33.322 [2024-11-21 00:05:23.513898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:16:33.322 [2024-11-21 00:05:23.513906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.322 [2024-11-21 00:05:23.518907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.322 [2024-11-21 00:05:23.518938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:33.322 [2024-11-21 00:05:23.518947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.977 ms 00:16:33.322 [2024-11-21 00:05:23.518960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.322 [2024-11-21 00:05:23.519052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.322 [2024-11-21 00:05:23.519067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:33.322 [2024-11-21 00:05:23.519074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:33.322 [2024-11-21 00:05:23.519085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.322 [2024-11-21 00:05:23.521704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.322 [2024-11-21 00:05:23.521733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:33.322 [2024-11-21 00:05:23.521740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:16:33.322 [2024-11-21 00:05:23.521748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.322 [2024-11-21 00:05:23.524023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.322 [2024-11-21 00:05:23.524051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:33.322 [2024-11-21 00:05:23.524058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.252 ms 00:16:33.322 [2024-11-21 00:05:23.524066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.322 [2024-11-21 00:05:23.525670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.322 [2024-11-21 00:05:23.525702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:33.322 [2024-11-21 00:05:23.525710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.581 ms 00:16:33.322 [2024-11-21 00:05:23.525721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.322 [2024-11-21 00:05:23.527351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.322 [2024-11-21 00:05:23.527379] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:33.322 [2024-11-21 00:05:23.527386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.588 ms 00:16:33.322 [2024-11-21 00:05:23.527393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.322 [2024-11-21 00:05:23.527415] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:33.322 [2024-11-21 00:05:23.527431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:33.322 [2024-11-21 00:05:23.527585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:33.322 [2024-11-21 00:05:23.527730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.527996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528091] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:33.323 [2024-11-21 00:05:23.528126] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:33.323 [2024-11-21 00:05:23.528133] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bcd3eba6-cf5c-4e79-a489-3957b19bde63 00:16:33.323 [2024-11-21 00:05:23.528141] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:33.323 [2024-11-21 00:05:23.528149] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:33.323 [2024-11-21 00:05:23.528156] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:33.323 [2024-11-21 00:05:23.528162] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:33.323 [2024-11-21 00:05:23.528171] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:33.323 [2024-11-21 00:05:23.528178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:33.323 [2024-11-21 00:05:23.528191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:33.323 [2024-11-21 00:05:23.528196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:33.323 [2024-11-21 00:05:23.528202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:33.323 [2024-11-21 00:05:23.528207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.323 [2024-11-21 00:05:23.528215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:33.323 [2024-11-21 00:05:23.528222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:16:33.323 [2024-11-21 00:05:23.528229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.323 [2024-11-21 00:05:23.530107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.323 [2024-11-21 00:05:23.530132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:33.323 [2024-11-21 00:05:23.530140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.860 ms 00:16:33.323 [2024-11-21 00:05:23.530148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.323 [2024-11-21 00:05:23.530241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.323 [2024-11-21 00:05:23.530252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:33.323 [2024-11-21 00:05:23.530259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:33.323 [2024-11-21 00:05:23.530268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.323 [2024-11-21 00:05:23.535547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.323 [2024-11-21 00:05:23.535663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:33.323 [2024-11-21 00:05:23.535675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.323 [2024-11-21 00:05:23.535684] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:33.323 [2024-11-21 00:05:23.535728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.323 [2024-11-21 00:05:23.535736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:33.323 [2024-11-21 00:05:23.535743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.323 [2024-11-21 00:05:23.535750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.323 [2024-11-21 00:05:23.535801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.323 [2024-11-21 00:05:23.535815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:33.323 [2024-11-21 00:05:23.535822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.323 [2024-11-21 00:05:23.535829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.323 [2024-11-21 00:05:23.535840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.323 [2024-11-21 00:05:23.535848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:33.323 [2024-11-21 00:05:23.535854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.323 [2024-11-21 00:05:23.535864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.323 [2024-11-21 00:05:23.546579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.323 [2024-11-21 00:05:23.546706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:33.323 [2024-11-21 00:05:23.546719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.324 [2024-11-21 00:05:23.546727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.324 [2024-11-21 00:05:23.555614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.324 [2024-11-21 00:05:23.555648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:33.324 [2024-11-21 00:05:23.555657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.324 [2024-11-21 00:05:23.555665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.324 [2024-11-21 00:05:23.555721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.324 [2024-11-21 00:05:23.555732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:33.324 [2024-11-21 00:05:23.555739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.324 [2024-11-21 00:05:23.555746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.324 [2024-11-21 00:05:23.555776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.324 [2024-11-21 00:05:23.555786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:33.324 [2024-11-21 00:05:23.555793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.324 [2024-11-21 00:05:23.555803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.324 [2024-11-21 00:05:23.555861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.324 [2024-11-21 00:05:23.555871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:33.324 [2024-11-21 00:05:23.555879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:33.324 [2024-11-21 00:05:23.555887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.324 [2024-11-21 00:05:23.555910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.324 [2024-11-21 00:05:23.555919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:33.324 [2024-11-21 00:05:23.555925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.324 [2024-11-21 00:05:23.555933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.324 [2024-11-21 00:05:23.555966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.324 [2024-11-21 00:05:23.555975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:33.324 [2024-11-21 00:05:23.555986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.324 [2024-11-21 00:05:23.555993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.324 [2024-11-21 00:05:23.556036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.324 [2024-11-21 00:05:23.556046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:33.324 [2024-11-21 00:05:23.556053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.324 [2024-11-21 00:05:23.556062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.324 [2024-11-21 00:05:23.556178] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 243.143 ms, result 0 00:16:33.324 true 00:16:33.324 00:05:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84677 00:16:33.324 00:05:23 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 84677 ']' 00:16:33.324 00:05:23 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 84677 00:16:33.324 00:05:23 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:33.324 00:05:23 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.324 00:05:23 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84677 00:16:33.324 killing process with pid 84677 00:16:33.324 Received shutdown signal, test time was about 4.000000 seconds 00:16:33.324 00:16:33.324 Latency(us) 00:16:33.324 [2024-11-21T00:05:23.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.324 [2024-11-21T00:05:23.745Z] =================================================================================================================== 00:16:33.324 [2024-11-21T00:05:23.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:33.324 00:05:23 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.324 00:05:23 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.324 00:05:23 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84677' 00:16:33.324 00:05:23 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 84677 00:16:33.324 00:05:23 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 84677 00:16:35.870 Remove shared memory files 00:16:35.871 00:05:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:35.871 00:05:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:35.871 00:05:25 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:35.871 00:05:25 
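The FTL shutdown trace above is what a clean ftl_bdevperf teardown looks like: all 100 bands report 0 / 261120 valid blocks in state free with wr_cnt 0, every rollback step returns status 0, and WAF prints as inf because the 960 total writes were all metadata (user writes: 0, so the write-amplification ratio divides by zero). A minimal awk sketch for summarizing band states out of such a trace, assuming the harness's one-entry-per-line output has been captured to a hypothetical ftl_shutdown.log:

    awk '/ftl_dev_dump_bands/ && /Band [0-9]+:/ { states[$NF]++ }   # last field is the band state
         END { for (s in states) printf "%-8s %d bands\n", s, states[s] }' ftl_shutdown.log

On the run above this prints a single line, "free 100".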
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:35.871 00:05:25 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:35.871 00:05:25 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:35.871 00:05:25 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:35.871 00:05:25 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:35.871 ************************************ 00:16:35.871 END TEST ftl_bdevperf 00:16:35.871 ************************************ 00:16:35.871 00:16:35.871 real 0m23.358s 00:16:35.871 user 0m25.926s 00:16:35.871 sys 0m1.060s 00:16:35.871 00:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.871 00:05:25 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:35.871 00:05:25 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:35.871 00:05:25 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:35.871 00:05:25 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.871 00:05:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:35.871 ************************************ 00:16:35.871 START TEST ftl_trim 00:16:35.871 ************************************ 00:16:35.871 00:05:25 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:35.871 * Looking for test storage... 00:16:35.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:35.871 00:05:25 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:35.871 00:05:25 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:35.871 00:05:25 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.871 00:05:26 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:35.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.871 --rc genhtml_branch_coverage=1 00:16:35.871 --rc genhtml_function_coverage=1 00:16:35.871 --rc genhtml_legend=1 00:16:35.871 --rc geninfo_all_blocks=1 00:16:35.871 --rc geninfo_unexecuted_blocks=1 00:16:35.871 00:16:35.871 ' 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:35.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.871 --rc genhtml_branch_coverage=1 00:16:35.871 --rc genhtml_function_coverage=1 00:16:35.871 --rc genhtml_legend=1 00:16:35.871 --rc geninfo_all_blocks=1 00:16:35.871 --rc geninfo_unexecuted_blocks=1 00:16:35.871 00:16:35.871 ' 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:35.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.871 --rc genhtml_branch_coverage=1 00:16:35.871 --rc genhtml_function_coverage=1 00:16:35.871 --rc genhtml_legend=1 00:16:35.871 --rc geninfo_all_blocks=1 00:16:35.871 --rc geninfo_unexecuted_blocks=1 00:16:35.871 00:16:35.871 ' 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:35.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.871 --rc genhtml_branch_coverage=1 00:16:35.871 --rc genhtml_function_coverage=1 00:16:35.871 --rc genhtml_legend=1 00:16:35.871 --rc geninfo_all_blocks=1 00:16:35.871 --rc geninfo_unexecuted_blocks=1 00:16:35.871 00:16:35.871 ' 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
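The xtrace above walks scripts/common.sh deciding whether the installed lcov predates 2.x (`lt 1.15 2`): both version strings are split on dots, dashes, and colons into arrays and compared numerically field by field, with missing fields treated as 0. A condensed standalone sketch of that comparison, assuming purely numeric fields (the real helper additionally routes each field through its `decimal` normalizer, as traced above):

    cmp_versions_lt() {   # returns 0 (true) when version $1 sorts before version $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    cmp_versions_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option spelling"

which is why the LCOV_OPTS exported above use the 1.x `--rc lcov_branch_coverage=1` spelling.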
00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:35.871 00:05:26 ftl.ftl_trim -- 
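Worth decoding the trim.sh knobs just exported: data_size_in_blocks=65536 and unmap_size_in_blocks=1024 appear to be counted in the bdev's 4096-byte blocks (the bdev_get_bdevs dump below shows "block_size": 4096), so the test would exercise a 256 MiB data region trimmed in 4 MiB units, under the 240 s timeout. A quick check of that arithmetic, assuming the 4 KiB block size holds:

    echo $(( 65536 * 4096 / 1024 / 1024 ))   # data region: 256 (MiB)
    echo $(( 1024 * 4096 / 1024 / 1024 ))    # unmap unit:    4 (MiB)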
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85023 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85023 00:16:35.871 00:05:26 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85023 ']' 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.871 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:35.871 [2024-11-21 00:05:26.130791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:35.872 [2024-11-21 00:05:26.131150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85023 ] 00:16:35.872 [2024-11-21 00:05:26.268092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:36.133 [2024-11-21 00:05:26.342872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.133 [2024-11-21 00:05:26.343226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.133 [2024-11-21 00:05:26.343323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.706 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.706 00:05:26 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:36.706 00:05:26 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:36.706 00:05:26 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:36.706 00:05:26 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:36.706 00:05:26 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:36.706 00:05:26 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:36.706 00:05:26 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:36.967 00:05:27 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:36.967 00:05:27 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:36.967 00:05:27 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:36.967 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:36.967 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:36.967 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:36.967 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:36.967 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:37.228 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:37.228 { 00:16:37.228 "name": "nvme0n1", 00:16:37.228 "aliases": [ 
00:16:37.228 "5052c683-8abf-47cf-9d84-9ee97d8edee3" 00:16:37.228 ], 00:16:37.228 "product_name": "NVMe disk", 00:16:37.228 "block_size": 4096, 00:16:37.228 "num_blocks": 1310720, 00:16:37.228 "uuid": "5052c683-8abf-47cf-9d84-9ee97d8edee3", 00:16:37.228 "numa_id": -1, 00:16:37.228 "assigned_rate_limits": { 00:16:37.228 "rw_ios_per_sec": 0, 00:16:37.228 "rw_mbytes_per_sec": 0, 00:16:37.228 "r_mbytes_per_sec": 0, 00:16:37.228 "w_mbytes_per_sec": 0 00:16:37.228 }, 00:16:37.228 "claimed": true, 00:16:37.228 "claim_type": "read_many_write_one", 00:16:37.228 "zoned": false, 00:16:37.228 "supported_io_types": { 00:16:37.228 "read": true, 00:16:37.228 "write": true, 00:16:37.228 "unmap": true, 00:16:37.228 "flush": true, 00:16:37.228 "reset": true, 00:16:37.228 "nvme_admin": true, 00:16:37.228 "nvme_io": true, 00:16:37.228 "nvme_io_md": false, 00:16:37.228 "write_zeroes": true, 00:16:37.228 "zcopy": false, 00:16:37.228 "get_zone_info": false, 00:16:37.228 "zone_management": false, 00:16:37.228 "zone_append": false, 00:16:37.228 "compare": true, 00:16:37.228 "compare_and_write": false, 00:16:37.228 "abort": true, 00:16:37.228 "seek_hole": false, 00:16:37.228 "seek_data": false, 00:16:37.228 "copy": true, 00:16:37.228 "nvme_iov_md": false 00:16:37.228 }, 00:16:37.228 "driver_specific": { 00:16:37.228 "nvme": [ 00:16:37.228 { 00:16:37.228 "pci_address": "0000:00:11.0", 00:16:37.228 "trid": { 00:16:37.228 "trtype": "PCIe", 00:16:37.228 "traddr": "0000:00:11.0" 00:16:37.228 }, 00:16:37.228 "ctrlr_data": { 00:16:37.228 "cntlid": 0, 00:16:37.228 "vendor_id": "0x1b36", 00:16:37.228 "model_number": "QEMU NVMe Ctrl", 00:16:37.228 "serial_number": "12341", 00:16:37.228 "firmware_revision": "8.0.0", 00:16:37.228 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:37.228 "oacs": { 00:16:37.228 "security": 0, 00:16:37.228 "format": 1, 00:16:37.228 "firmware": 0, 00:16:37.228 "ns_manage": 1 00:16:37.228 }, 00:16:37.228 "multi_ctrlr": false, 00:16:37.228 "ana_reporting": false 00:16:37.228 }, 00:16:37.228 "vs": { 00:16:37.228 "nvme_version": "1.4" 00:16:37.228 }, 00:16:37.228 "ns_data": { 00:16:37.228 "id": 1, 00:16:37.228 "can_share": false 00:16:37.228 } 00:16:37.228 } 00:16:37.228 ], 00:16:37.228 "mp_policy": "active_passive" 00:16:37.228 } 00:16:37.228 } 00:16:37.228 ]' 00:16:37.228 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:37.228 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:37.228 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:37.228 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:37.228 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:37.228 00:05:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:37.228 00:05:27 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:37.228 00:05:27 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:37.228 00:05:27 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:37.228 00:05:27 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:37.228 00:05:27 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:37.488 00:05:27 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=67ce800c-6c4b-4327-b246-50c70245d9c0 00:16:37.488 00:05:27 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:37.488 00:05:27 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 67ce800c-6c4b-4327-b246-50c70245d9c0 00:16:37.749 00:05:28 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:38.010 00:05:28 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=edb36aba-1296-43a4-b554-5e0360e2821c 00:16:38.010 00:05:28 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u edb36aba-1296-43a4-b554-5e0360e2821c 00:16:38.272 00:05:28 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=d50f7332-c220-4060-9706-e294383a870e 00:16:38.272 00:05:28 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d50f7332-c220-4060-9706-e294383a870e 00:16:38.272 00:05:28 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:38.272 00:05:28 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:38.272 00:05:28 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=d50f7332-c220-4060-9706-e294383a870e 00:16:38.272 00:05:28 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:38.272 00:05:28 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size d50f7332-c220-4060-9706-e294383a870e 00:16:38.272 00:05:28 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d50f7332-c220-4060-9706-e294383a870e 00:16:38.272 00:05:28 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:38.272 00:05:28 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:38.272 00:05:28 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:38.272 00:05:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d50f7332-c220-4060-9706-e294383a870e 00:16:38.534 00:05:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:38.534 { 00:16:38.534 "name": "d50f7332-c220-4060-9706-e294383a870e", 00:16:38.534 "aliases": [ 00:16:38.534 "lvs/nvme0n1p0" 00:16:38.534 ], 00:16:38.534 "product_name": "Logical Volume", 00:16:38.534 "block_size": 4096, 00:16:38.534 "num_blocks": 26476544, 00:16:38.534 "uuid": "d50f7332-c220-4060-9706-e294383a870e", 00:16:38.534 "assigned_rate_limits": { 00:16:38.534 "rw_ios_per_sec": 0, 00:16:38.534 "rw_mbytes_per_sec": 0, 00:16:38.534 "r_mbytes_per_sec": 0, 00:16:38.534 "w_mbytes_per_sec": 0 00:16:38.534 }, 00:16:38.534 "claimed": false, 00:16:38.534 "zoned": false, 00:16:38.534 "supported_io_types": { 00:16:38.534 "read": true, 00:16:38.534 "write": true, 00:16:38.534 "unmap": true, 00:16:38.534 "flush": false, 00:16:38.534 "reset": true, 00:16:38.534 "nvme_admin": false, 00:16:38.534 "nvme_io": false, 00:16:38.534 "nvme_io_md": false, 00:16:38.534 "write_zeroes": true, 00:16:38.534 "zcopy": false, 00:16:38.534 "get_zone_info": false, 00:16:38.534 "zone_management": false, 00:16:38.534 "zone_append": false, 00:16:38.534 "compare": false, 00:16:38.534 "compare_and_write": false, 00:16:38.534 "abort": false, 00:16:38.534 "seek_hole": true, 00:16:38.534 "seek_data": true, 00:16:38.534 "copy": false, 00:16:38.534 "nvme_iov_md": false 00:16:38.534 }, 00:16:38.534 "driver_specific": { 00:16:38.534 "lvol": { 00:16:38.534 "lvol_store_uuid": "edb36aba-1296-43a4-b554-5e0360e2821c", 00:16:38.534 "base_bdev": "nvme0n1", 00:16:38.534 "thin_provision": true, 00:16:38.534 "num_allocated_clusters": 0, 00:16:38.534 "snapshot": false, 00:16:38.534 "clone": false, 00:16:38.534 "esnap_clone": false 00:16:38.534 } 00:16:38.534 } 00:16:38.534 } 00:16:38.534 ]' 00:16:38.534 00:05:28 ftl.ftl_trim -- 
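get_bdev_size, traced here and above for nvme0n1, is just block_size × num_blocks from the bdev_get_bdevs JSON, scaled down to MiB: 4096 × 1310720 gave the 5120 reported for nvme0n1, and 4096 × 26476544 from the lvol dump just printed works out to the 103424 derived next. A self-contained sketch of that helper under the same assumptions (rpc.py reachable on the default socket, exactly one bdev in the reply):

    get_bdev_size() {
        local bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))   # size in MiB
    }
    get_bdev_size nvme0n1   # -> 5120 on the setup above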
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:38.534 00:05:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:38.534 00:05:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:38.534 00:05:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:38.534 00:05:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:38.534 00:05:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:38.534 00:05:28 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:38.534 00:05:28 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:38.534 00:05:28 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:38.793 00:05:29 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:38.793 00:05:29 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:38.793 00:05:29 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size d50f7332-c220-4060-9706-e294383a870e 00:16:38.793 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d50f7332-c220-4060-9706-e294383a870e 00:16:38.793 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:38.793 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:38.793 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:38.793 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d50f7332-c220-4060-9706-e294383a870e 00:16:39.051 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:39.051 { 00:16:39.051 "name": "d50f7332-c220-4060-9706-e294383a870e", 00:16:39.051 "aliases": [ 00:16:39.051 "lvs/nvme0n1p0" 00:16:39.051 ], 00:16:39.051 "product_name": "Logical Volume", 00:16:39.051 "block_size": 4096, 00:16:39.051 "num_blocks": 26476544, 00:16:39.051 "uuid": "d50f7332-c220-4060-9706-e294383a870e", 00:16:39.051 "assigned_rate_limits": { 00:16:39.051 "rw_ios_per_sec": 0, 00:16:39.051 "rw_mbytes_per_sec": 0, 00:16:39.051 "r_mbytes_per_sec": 0, 00:16:39.051 "w_mbytes_per_sec": 0 00:16:39.051 }, 00:16:39.051 "claimed": false, 00:16:39.051 "zoned": false, 00:16:39.051 "supported_io_types": { 00:16:39.051 "read": true, 00:16:39.051 "write": true, 00:16:39.051 "unmap": true, 00:16:39.051 "flush": false, 00:16:39.052 "reset": true, 00:16:39.052 "nvme_admin": false, 00:16:39.052 "nvme_io": false, 00:16:39.052 "nvme_io_md": false, 00:16:39.052 "write_zeroes": true, 00:16:39.052 "zcopy": false, 00:16:39.052 "get_zone_info": false, 00:16:39.052 "zone_management": false, 00:16:39.052 "zone_append": false, 00:16:39.052 "compare": false, 00:16:39.052 "compare_and_write": false, 00:16:39.052 "abort": false, 00:16:39.052 "seek_hole": true, 00:16:39.052 "seek_data": true, 00:16:39.052 "copy": false, 00:16:39.052 "nvme_iov_md": false 00:16:39.052 }, 00:16:39.052 "driver_specific": { 00:16:39.052 "lvol": { 00:16:39.052 "lvol_store_uuid": "edb36aba-1296-43a4-b554-5e0360e2821c", 00:16:39.052 "base_bdev": "nvme0n1", 00:16:39.052 "thin_provision": true, 00:16:39.052 "num_allocated_clusters": 0, 00:16:39.052 "snapshot": false, 00:16:39.052 "clone": false, 00:16:39.052 "esnap_clone": false 00:16:39.052 } 00:16:39.052 } 00:16:39.052 } 00:16:39.052 ]' 00:16:39.052 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:39.052 00:05:29 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:39.052 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:39.052 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:39.052 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:39.052 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:39.052 00:05:29 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:39.052 00:05:29 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:39.311 00:05:29 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:39.311 00:05:29 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:39.311 00:05:29 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size d50f7332-c220-4060-9706-e294383a870e 00:16:39.311 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d50f7332-c220-4060-9706-e294383a870e 00:16:39.311 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:39.311 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:39.311 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:39.311 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d50f7332-c220-4060-9706-e294383a870e 00:16:39.311 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:39.311 { 00:16:39.311 "name": "d50f7332-c220-4060-9706-e294383a870e", 00:16:39.311 "aliases": [ 00:16:39.311 "lvs/nvme0n1p0" 00:16:39.311 ], 00:16:39.311 "product_name": "Logical Volume", 00:16:39.311 "block_size": 4096, 00:16:39.311 "num_blocks": 26476544, 00:16:39.311 "uuid": "d50f7332-c220-4060-9706-e294383a870e", 00:16:39.311 "assigned_rate_limits": { 00:16:39.311 "rw_ios_per_sec": 0, 00:16:39.311 "rw_mbytes_per_sec": 0, 00:16:39.311 "r_mbytes_per_sec": 0, 00:16:39.311 "w_mbytes_per_sec": 0 00:16:39.311 }, 00:16:39.311 "claimed": false, 00:16:39.311 "zoned": false, 00:16:39.311 "supported_io_types": { 00:16:39.311 "read": true, 00:16:39.311 "write": true, 00:16:39.311 "unmap": true, 00:16:39.311 "flush": false, 00:16:39.311 "reset": true, 00:16:39.311 "nvme_admin": false, 00:16:39.311 "nvme_io": false, 00:16:39.311 "nvme_io_md": false, 00:16:39.311 "write_zeroes": true, 00:16:39.311 "zcopy": false, 00:16:39.311 "get_zone_info": false, 00:16:39.311 "zone_management": false, 00:16:39.311 "zone_append": false, 00:16:39.311 "compare": false, 00:16:39.311 "compare_and_write": false, 00:16:39.311 "abort": false, 00:16:39.311 "seek_hole": true, 00:16:39.311 "seek_data": true, 00:16:39.311 "copy": false, 00:16:39.311 "nvme_iov_md": false 00:16:39.311 }, 00:16:39.311 "driver_specific": { 00:16:39.311 "lvol": { 00:16:39.311 "lvol_store_uuid": "edb36aba-1296-43a4-b554-5e0360e2821c", 00:16:39.311 "base_bdev": "nvme0n1", 00:16:39.311 "thin_provision": true, 00:16:39.311 "num_allocated_clusters": 0, 00:16:39.311 "snapshot": false, 00:16:39.311 "clone": false, 00:16:39.311 "esnap_clone": false 00:16:39.311 } 00:16:39.311 } 00:16:39.311 } 00:16:39.311 ]' 00:16:39.311 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:39.571 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:39.571 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:39.571 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:39.571 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:39.571 00:05:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:39.571 00:05:29 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:39.571 00:05:29 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d50f7332-c220-4060-9706-e294383a870e -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:39.571 [2024-11-21 00:05:29.938500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.571 [2024-11-21 00:05:29.938635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:39.571 [2024-11-21 00:05:29.938652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:39.571 [2024-11-21 00:05:29.938671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.571 [2024-11-21 00:05:29.940710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.571 [2024-11-21 00:05:29.940742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:39.571 [2024-11-21 00:05:29.940750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.006 ms 00:16:39.571 [2024-11-21 00:05:29.940759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.571 [2024-11-21 00:05:29.940833] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:39.571 [2024-11-21 00:05:29.941025] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:39.571 [2024-11-21 00:05:29.941039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.571 [2024-11-21 00:05:29.941048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:39.571 [2024-11-21 00:05:29.941056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:16:39.571 [2024-11-21 00:05:29.941063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.571 [2024-11-21 00:05:29.941287] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e 00:16:39.571 [2024-11-21 00:05:29.942529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.571 [2024-11-21 00:05:29.942554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:39.571 [2024-11-21 00:05:29.942564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:39.571 [2024-11-21 00:05:29.942570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.571 [2024-11-21 00:05:29.949236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.571 [2024-11-21 00:05:29.949260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:39.571 [2024-11-21 00:05:29.949270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.602 ms 00:16:39.571 [2024-11-21 00:05:29.949276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.571 [2024-11-21 00:05:29.949406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.571 [2024-11-21 00:05:29.949417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:39.571 [2024-11-21 00:05:29.949426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.050 ms 00:16:39.571 [2024-11-21 00:05:29.949431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.571 [2024-11-21 00:05:29.949468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.571 [2024-11-21 00:05:29.949476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:39.571 [2024-11-21 00:05:29.949484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:39.571 [2024-11-21 00:05:29.949490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.571 [2024-11-21 00:05:29.949523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:39.571 [2024-11-21 00:05:29.951083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.571 [2024-11-21 00:05:29.951121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:39.571 [2024-11-21 00:05:29.951131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms 00:16:39.571 [2024-11-21 00:05:29.951138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.571 [2024-11-21 00:05:29.951183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.571 [2024-11-21 00:05:29.951193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:39.571 [2024-11-21 00:05:29.951199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:39.571 [2024-11-21 00:05:29.951208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.571 [2024-11-21 00:05:29.951234] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:39.571 [2024-11-21 00:05:29.951361] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:39.571 [2024-11-21 00:05:29.951373] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:39.571 [2024-11-21 00:05:29.951383] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:39.571 [2024-11-21 00:05:29.951392] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:39.571 [2024-11-21 00:05:29.951400] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:39.572 [2024-11-21 00:05:29.951407] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:39.572 [2024-11-21 00:05:29.951415] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:39.572 [2024-11-21 00:05:29.951431] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:39.572 [2024-11-21 00:05:29.951438] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:39.572 [2024-11-21 00:05:29.951444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.572 [2024-11-21 00:05:29.951451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:39.572 [2024-11-21 00:05:29.951458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:16:39.572 [2024-11-21 00:05:29.951466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.572 [2024-11-21 00:05:29.951546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.572 
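A cross-check worth knowing for the startup trace above: the L2P table size follows directly from the two numbers just reported, 23592960 L2P entries at an address size of 4 bytes:

    echo $(( 23592960 * 4 / 1024 / 1024 ))   # -> 90 (MiB)

which matches the 90.00 MiB "Region l2p" block count in the NV cache layout dump that follows.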
[2024-11-21 00:05:29.951555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:39.572 [2024-11-21 00:05:29.951561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:39.572 [2024-11-21 00:05:29.951568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.572 [2024-11-21 00:05:29.951668] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:39.572 [2024-11-21 00:05:29.951677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:39.572 [2024-11-21 00:05:29.951683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:39.572 [2024-11-21 00:05:29.951691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:39.572 [2024-11-21 00:05:29.951705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:39.572 [2024-11-21 00:05:29.951717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:39.572 [2024-11-21 00:05:29.951723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:39.572 [2024-11-21 00:05:29.951735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:39.572 [2024-11-21 00:05:29.951744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:39.572 [2024-11-21 00:05:29.951750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:39.572 [2024-11-21 00:05:29.951759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:39.572 [2024-11-21 00:05:29.951765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:39.572 [2024-11-21 00:05:29.951774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:39.572 [2024-11-21 00:05:29.951787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:39.572 [2024-11-21 00:05:29.951793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:39.572 [2024-11-21 00:05:29.951807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.572 [2024-11-21 00:05:29.951822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:39.572 [2024-11-21 00:05:29.951829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.572 [2024-11-21 00:05:29.951843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:39.572 [2024-11-21 00:05:29.951849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.572 [2024-11-21 00:05:29.951873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:39.572 [2024-11-21 00:05:29.951883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.572 [2024-11-21 00:05:29.951896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:39.572 [2024-11-21 00:05:29.951902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:39.572 [2024-11-21 00:05:29.951915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:39.572 [2024-11-21 00:05:29.951924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:39.572 [2024-11-21 00:05:29.951930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:39.572 [2024-11-21 00:05:29.951937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:39.572 [2024-11-21 00:05:29.951943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:39.572 [2024-11-21 00:05:29.951950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:39.572 [2024-11-21 00:05:29.951963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:39.572 [2024-11-21 00:05:29.951969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.572 [2024-11-21 00:05:29.951976] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:39.572 [2024-11-21 00:05:29.951982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:39.572 [2024-11-21 00:05:29.951992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:39.572 [2024-11-21 00:05:29.951998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.572 [2024-11-21 00:05:29.952016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:39.572 [2024-11-21 00:05:29.952022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:39.572 [2024-11-21 00:05:29.952030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:39.572 [2024-11-21 00:05:29.952036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:39.572 [2024-11-21 00:05:29.952043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:39.572 [2024-11-21 00:05:29.952050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:39.572 [2024-11-21 00:05:29.952061] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:39.572 [2024-11-21 00:05:29.952069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:39.572 [2024-11-21 00:05:29.952078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:39.572 [2024-11-21 00:05:29.952085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:39.572 [2024-11-21 00:05:29.952093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:39.572 [2024-11-21 00:05:29.952100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:39.572 [2024-11-21 00:05:29.952109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:39.572 [2024-11-21 00:05:29.952115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:39.572 [2024-11-21 00:05:29.952124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:39.572 [2024-11-21 00:05:29.952129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:39.572 [2024-11-21 00:05:29.952136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:39.572 [2024-11-21 00:05:29.952141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:39.572 [2024-11-21 00:05:29.952148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:39.572 [2024-11-21 00:05:29.952153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:39.572 [2024-11-21 00:05:29.952160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:39.572 [2024-11-21 00:05:29.952165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:39.572 [2024-11-21 00:05:29.952172] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:39.572 [2024-11-21 00:05:29.952178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:39.573 [2024-11-21 00:05:29.952186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:39.573 [2024-11-21 00:05:29.952191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:39.573 [2024-11-21 00:05:29.952198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:39.573 [2024-11-21 00:05:29.952204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:39.573 [2024-11-21 00:05:29.952211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.573 [2024-11-21 00:05:29.952216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:39.573 [2024-11-21 00:05:29.952227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:16:39.573 [2024-11-21 00:05:29.952233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.573 [2024-11-21 00:05:29.952293] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:39.573 [2024-11-21 00:05:29.952324] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:42.118 [2024-11-21 00:05:32.070961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.071154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:42.118 [2024-11-21 00:05:32.071245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2118.655 ms 00:16:42.118 [2024-11-21 00:05:32.071271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.092554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.092766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:42.118 [2024-11-21 00:05:32.092935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.148 ms 00:16:42.118 [2024-11-21 00:05:32.092977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.093227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.093272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:42.118 [2024-11-21 00:05:32.093409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:16:42.118 [2024-11-21 00:05:32.093465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.105046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.105166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:42.118 [2024-11-21 00:05:32.105229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.473 ms 00:16:42.118 [2024-11-21 00:05:32.105254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.105346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.105677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:42.118 [2024-11-21 00:05:32.105765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:42.118 [2024-11-21 00:05:32.105792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.106373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.106475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:42.118 [2024-11-21 00:05:32.106532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:16:42.118 [2024-11-21 00:05:32.106554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.106714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.106763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:42.118 [2024-11-21 00:05:32.106825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:16:42.118 [2024-11-21 00:05:32.106860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.113989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.114108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:42.118 [2024-11-21 00:05:32.114161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.063 ms 00:16:42.118 [2024-11-21 00:05:32.114184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.123211] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:42.118 [2024-11-21 00:05:32.140362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.140482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:42.118 [2024-11-21 00:05:32.140532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.033 ms 00:16:42.118 [2024-11-21 00:05:32.140576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.191516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.191639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:42.118 [2024-11-21 00:05:32.191694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.852 ms 00:16:42.118 [2024-11-21 00:05:32.191740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.191977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.192063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:42.118 [2024-11-21 00:05:32.192091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:42.118 [2024-11-21 00:05:32.192112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.196079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.196189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:42.118 [2024-11-21 00:05:32.196241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.922 ms 00:16:42.118 [2024-11-21 00:05:32.196310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.200052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.200158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:42.118 [2024-11-21 00:05:32.200241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.657 ms 00:16:42.118 [2024-11-21 00:05:32.200264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.200683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.200757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:42.118 [2024-11-21 00:05:32.200807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:16:42.118 [2024-11-21 00:05:32.200833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.237032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.237141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:42.118 [2024-11-21 00:05:32.237190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.131 ms 00:16:42.118 [2024-11-21 00:05:32.237224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
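The Action/name/duration/status quads above are emitted by FTL's management pipeline while the test's create RPC is in flight; the 2118.655 ms 'Scrub NV cache' step accounts for almost all of the startup time. As a minimal sketch of the kind of call that kicks off this sequence, assuming a running SPDK target reachable via scripts/rpc.py (the base-bdev and cache names are the ones this log reports; trim.sh's own helper wraps the same RPCs):

  # sketch: create the FTL bdev whose startup is traced above, then wait for it to register
  scripts/rpc.py bdev_ftl_create -b ftl0 -d d50f7332-c220-4060-9706-e294383a870e -c nvc0n1p0
  scripts/rpc.py bdev_wait_for_examine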
00:16:42.118 [2024-11-21 00:05:32.242585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.242695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:42.118 [2024-11-21 00:05:32.242750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.271 ms 00:16:42.118 [2024-11-21 00:05:32.242778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.246928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.246962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:42.118 [2024-11-21 00:05:32.246972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.010 ms 00:16:42.118 [2024-11-21 00:05:32.246982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.251255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.251290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:42.118 [2024-11-21 00:05:32.251314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.230 ms 00:16:42.118 [2024-11-21 00:05:32.251326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.251377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.251390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:42.118 [2024-11-21 00:05:32.251399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:42.118 [2024-11-21 00:05:32.251410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.251489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.118 [2024-11-21 00:05:32.251500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:42.118 [2024-11-21 00:05:32.251508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:42.118 [2024-11-21 00:05:32.251518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.118 [2024-11-21 00:05:32.252523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:42.118 [2024-11-21 00:05:32.253543] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2313.637 ms, result 0 00:16:42.119 { "name": "ftl0", "uuid": "2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e" } 00:16:42.119 [2024-11-21 00:05:32.254617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:42.119 00:05:32 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:42.119 00:05:32 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:42.119 00:05:32 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:42.119 00:05:32 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:42.119 00:05:32 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:42.119 00:05:32 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:42.119 00:05:32 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:42.119 00:05:32 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:42.379 [ 00:16:42.379 { 00:16:42.379 "name": "ftl0", 00:16:42.380 "aliases": [ 00:16:42.380 "2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e" 00:16:42.380 ], 00:16:42.380 "product_name": "FTL disk", 00:16:42.380 "block_size": 4096, 00:16:42.380 "num_blocks": 23592960, 00:16:42.380 "uuid": "2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e", 00:16:42.380 "assigned_rate_limits": { 00:16:42.380 "rw_ios_per_sec": 0, 00:16:42.380 "rw_mbytes_per_sec": 0, 00:16:42.380 "r_mbytes_per_sec": 0, 00:16:42.380 "w_mbytes_per_sec": 0 00:16:42.380 }, 00:16:42.380 "claimed": false, 00:16:42.380 "zoned": false, 00:16:42.380 "supported_io_types": { 00:16:42.380 "read": true, 00:16:42.380 "write": true, 00:16:42.380 "unmap": true, 00:16:42.380 "flush": true, 00:16:42.380 "reset": false, 00:16:42.380 "nvme_admin": false, 00:16:42.380 "nvme_io": false, 00:16:42.380 "nvme_io_md": false, 00:16:42.380 "write_zeroes": true, 00:16:42.380 "zcopy": false, 00:16:42.380 "get_zone_info": false, 00:16:42.380 "zone_management": false, 00:16:42.380 "zone_append": false, 00:16:42.380 "compare": false, 00:16:42.380 "compare_and_write": false, 00:16:42.380 "abort": false, 00:16:42.380 "seek_hole": false, 00:16:42.380 "seek_data": false, 00:16:42.380 "copy": false, 00:16:42.380 "nvme_iov_md": false 00:16:42.380 }, 00:16:42.380 "driver_specific": { 00:16:42.380 "ftl": { 00:16:42.380 "base_bdev": "d50f7332-c220-4060-9706-e294383a870e", 00:16:42.380 "cache": "nvc0n1p0" 00:16:42.380 } 00:16:42.380 } 00:16:42.380 } 00:16:42.380 ] 00:16:42.380 00:05:32 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:42.380 00:05:32 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:42.380 00:05:32 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:42.641 00:05:32 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:42.641 00:05:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:42.902 00:05:33 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:42.902 { 00:16:42.902 "name": "ftl0", 00:16:42.902 "aliases": [ 00:16:42.902 "2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e" 00:16:42.902 ], 00:16:42.902 "product_name": "FTL disk", 00:16:42.902 "block_size": 4096, 00:16:42.902 "num_blocks": 23592960, 00:16:42.902 "uuid": "2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e", 00:16:42.902 "assigned_rate_limits": { 00:16:42.903 "rw_ios_per_sec": 0, 00:16:42.903 "rw_mbytes_per_sec": 0, 00:16:42.903 "r_mbytes_per_sec": 0, 00:16:42.903 "w_mbytes_per_sec": 0 00:16:42.903 }, 00:16:42.903 "claimed": false, 00:16:42.903 "zoned": false, 00:16:42.903 "supported_io_types": { 00:16:42.903 "read": true, 00:16:42.903 "write": true, 00:16:42.903 "unmap": true, 00:16:42.903 "flush": true, 00:16:42.903 "reset": false, 00:16:42.903 "nvme_admin": false, 00:16:42.903 "nvme_io": false, 00:16:42.903 "nvme_io_md": false, 00:16:42.903 "write_zeroes": true, 00:16:42.903 "zcopy": false, 00:16:42.903 "get_zone_info": false, 00:16:42.903 "zone_management": false, 00:16:42.903 "zone_append": false, 00:16:42.903 "compare": false, 00:16:42.903 "compare_and_write": false, 00:16:42.903 "abort": false, 00:16:42.903 "seek_hole": false, 00:16:42.903 "seek_data": false, 00:16:42.903 "copy": false, 00:16:42.903 "nvme_iov_md": false 00:16:42.903 }, 00:16:42.903 "driver_specific": { 00:16:42.903 "ftl": { 00:16:42.903 "base_bdev": "d50f7332-c220-4060-9706-e294383a870e", 
00:16:42.903 "cache": "nvc0n1p0" 00:16:42.903 } 00:16:42.903 } 00:16:42.903 } 00:16:42.903 ]' 00:16:42.903 00:05:33 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:42.903 00:05:33 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:42.903 00:05:33 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:42.903 [2024-11-21 00:05:33.297180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.903 [2024-11-21 00:05:33.297354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:42.903 [2024-11-21 00:05:33.297422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:42.903 [2024-11-21 00:05:33.297447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.903 [2024-11-21 00:05:33.297528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:42.903 [2024-11-21 00:05:33.298117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.903 [2024-11-21 00:05:33.298213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:42.903 [2024-11-21 00:05:33.298263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:16:42.903 [2024-11-21 00:05:33.298292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.903 [2024-11-21 00:05:33.298893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.903 [2024-11-21 00:05:33.298967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:42.903 [2024-11-21 00:05:33.299014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:16:42.903 [2024-11-21 00:05:33.299042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.903 [2024-11-21 00:05:33.302992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.903 [2024-11-21 00:05:33.303072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:42.903 [2024-11-21 00:05:33.303086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.634 ms 00:16:42.903 [2024-11-21 00:05:33.303096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.903 [2024-11-21 00:05:33.310173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.903 [2024-11-21 00:05:33.310287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:42.903 [2024-11-21 00:05:33.310347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.028 ms 00:16:42.903 [2024-11-21 00:05:33.310428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.903 [2024-11-21 00:05:33.312552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.903 [2024-11-21 00:05:33.312661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:42.903 [2024-11-21 00:05:33.312711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.000 ms 00:16:42.903 [2024-11-21 00:05:33.312757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.903 [2024-11-21 00:05:33.318002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.903 [2024-11-21 00:05:33.318109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:42.903 [2024-11-21 00:05:33.318159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 5.163 ms 00:16:42.903 [2024-11-21 00:05:33.318206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.903 [2024-11-21 00:05:33.318444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.903 [2024-11-21 00:05:33.318481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:42.903 [2024-11-21 00:05:33.318547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:42.903 [2024-11-21 00:05:33.318575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.903 [2024-11-21 00:05:33.321149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.903 [2024-11-21 00:05:33.321261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:42.903 [2024-11-21 00:05:33.321333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.534 ms 00:16:43.166 [2024-11-21 00:05:33.321437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.166 [2024-11-21 00:05:33.323649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.166 [2024-11-21 00:05:33.323748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:43.166 [2024-11-21 00:05:33.323797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.154 ms 00:16:43.166 [2024-11-21 00:05:33.323870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.166 [2024-11-21 00:05:33.325690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.166 [2024-11-21 00:05:33.325790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:43.166 [2024-11-21 00:05:33.325838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:16:43.166 [2024-11-21 00:05:33.325880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.166 [2024-11-21 00:05:33.327425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.166 [2024-11-21 00:05:33.327522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:43.166 [2024-11-21 00:05:33.327571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:16:43.166 [2024-11-21 00:05:33.327594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.166 [2024-11-21 00:05:33.327713] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:43.166 [2024-11-21 00:05:33.327773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.327854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.327938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.327969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328130] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.328972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.329029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.329039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.329048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.329056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.329067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.329075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.329084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.329091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 
00:05:33.329100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.329107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:43.166 [2024-11-21 00:05:33.329119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:16:43.167 [2024-11-21 00:05:33.329335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:43.167 [2024-11-21 00:05:33.329727] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:43.167 [2024-11-21 00:05:33.329735] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e 00:16:43.167 [2024-11-21 00:05:33.329744] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:43.167 [2024-11-21 00:05:33.329751] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:43.167 [2024-11-21 00:05:33.329760] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:43.167 [2024-11-21 00:05:33.329768] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:43.167 [2024-11-21 00:05:33.329778] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:43.167 [2024-11-21 00:05:33.329786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:43.167 
[2024-11-21 00:05:33.329798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:43.167 [2024-11-21 00:05:33.329804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:43.167 [2024-11-21 00:05:33.329812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:43.167 [2024-11-21 00:05:33.329819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.167 [2024-11-21 00:05:33.329829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:43.167 [2024-11-21 00:05:33.329838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:16:43.167 [2024-11-21 00:05:33.329848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.167 [2024-11-21 00:05:33.331746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.167 [2024-11-21 00:05:33.331854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:43.167 [2024-11-21 00:05:33.331869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.860 ms 00:16:43.167 [2024-11-21 00:05:33.331883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.167 [2024-11-21 00:05:33.332016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.167 [2024-11-21 00:05:33.332029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:43.167 [2024-11-21 00:05:33.332039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:43.167 [2024-11-21 00:05:33.332050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.167 [2024-11-21 00:05:33.338631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.167 [2024-11-21 00:05:33.338756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.167 [2024-11-21 00:05:33.338770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.167 [2024-11-21 00:05:33.338794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.167 [2024-11-21 00:05:33.338884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.167 [2024-11-21 00:05:33.338895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.167 [2024-11-21 00:05:33.338913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.168 [2024-11-21 00:05:33.338924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.168 [2024-11-21 00:05:33.338982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.168 [2024-11-21 00:05:33.338994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.168 [2024-11-21 00:05:33.339002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.168 [2024-11-21 00:05:33.339011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.168 [2024-11-21 00:05:33.339044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.168 [2024-11-21 00:05:33.339053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.168 [2024-11-21 00:05:33.339061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.168 [2024-11-21 00:05:33.339071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.168 [2024-11-21 00:05:33.351404] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:43.168 [2024-11-21 00:05:33.351450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.168 [2024-11-21 00:05:33.351460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.168 [2024-11-21 00:05:33.351472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.168 [2024-11-21 00:05:33.361474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.168 [2024-11-21 00:05:33.361527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.168 [2024-11-21 00:05:33.361547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.168 [2024-11-21 00:05:33.361560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.168 [2024-11-21 00:05:33.361642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.168 [2024-11-21 00:05:33.361653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.168 [2024-11-21 00:05:33.361662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.168 [2024-11-21 00:05:33.361672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.168 [2024-11-21 00:05:33.361732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.168 [2024-11-21 00:05:33.361744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.168 [2024-11-21 00:05:33.361752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.168 [2024-11-21 00:05:33.361761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.168 [2024-11-21 00:05:33.361851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.168 [2024-11-21 00:05:33.361863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.168 [2024-11-21 00:05:33.361871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.168 [2024-11-21 00:05:33.361880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.168 [2024-11-21 00:05:33.361935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.168 [2024-11-21 00:05:33.361949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:43.168 [2024-11-21 00:05:33.361967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.168 [2024-11-21 00:05:33.361978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.168 [2024-11-21 00:05:33.362029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.168 [2024-11-21 00:05:33.362040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.168 [2024-11-21 00:05:33.362058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.168 [2024-11-21 00:05:33.362067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.168 [2024-11-21 00:05:33.362126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.168 [2024-11-21 00:05:33.362140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.168 [2024-11-21 00:05:33.362149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.168 [2024-11-21 00:05:33.362160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.168 
[2024-11-21 00:05:33.362378] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.172 ms, result 0 00:16:43.168 true 00:16:43.168 00:05:33 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85023 00:16:43.168 00:05:33 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85023 ']' 00:16:43.168 00:05:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85023 00:16:43.168 00:05:33 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:43.168 00:05:33 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.168 00:05:33 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85023 00:16:43.168 killing process with pid 85023 00:16:43.168 00:05:33 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.168 00:05:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.168 00:05:33 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85023' 00:16:43.168 00:05:33 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85023 00:16:43.168 00:05:33 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85023 00:16:48.452 00:05:38 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:49.020 65536+0 records in 00:16:49.021 65536+0 records out 00:16:49.021 268435456 bytes (268 MB, 256 MiB) copied, 0.811713 s, 331 MB/s 00:16:49.021 00:05:39 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:49.021 [2024-11-21 00:05:39.225869] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
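The dd step above seeds a 256 MiB random_pattern file (65536 blocks of 4 KiB), and spdk_dd then replays it onto ftl0 through the bdev layer using the saved ftl.json config. A minimal sketch of that write plus a readback-and-compare, assuming the repo layout shown in this log; the readback step and its --count flag are an assumption about how such a pattern is typically verified, not part of the trace here:

  # sketch: write the pattern into the FTL bdev, read it back, and compare
  dd if=/dev/urandom of=test/ftl/random_pattern bs=4K count=65536
  build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 --json=test/ftl/config/ftl.json
  build/bin/spdk_dd --ib=ftl0 --of=test/ftl/readback --json=test/ftl/config/ftl.json --count=65536
  cmp test/ftl/random_pattern test/ftl/readback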
00:16:49.021 [2024-11-21 00:05:39.225956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85182 ] 00:16:49.021 [2024-11-21 00:05:39.355655] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.021 [2024-11-21 00:05:39.396391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.281 [2024-11-21 00:05:39.495030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:49.281 [2024-11-21 00:05:39.495246] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:49.281 [2024-11-21 00:05:39.647270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.281 [2024-11-21 00:05:39.647320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:49.281 [2024-11-21 00:05:39.647333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:49.281 [2024-11-21 00:05:39.647339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.281 [2024-11-21 00:05:39.649165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.281 [2024-11-21 00:05:39.649206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.281 [2024-11-21 00:05:39.649217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:16:49.281 [2024-11-21 00:05:39.649222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.281 [2024-11-21 00:05:39.649279] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:49.281 [2024-11-21 00:05:39.649677] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:49.281 [2024-11-21 00:05:39.649704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.281 [2024-11-21 00:05:39.649712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.281 [2024-11-21 00:05:39.649721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:16:49.281 [2024-11-21 00:05:39.649727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.281 [2024-11-21 00:05:39.651080] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:49.281 [2024-11-21 00:05:39.653890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.281 [2024-11-21 00:05:39.654025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:49.281 [2024-11-21 00:05:39.654042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.812 ms 00:16:49.281 [2024-11-21 00:05:39.654049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.281 [2024-11-21 00:05:39.654097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.281 [2024-11-21 00:05:39.654105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:49.281 [2024-11-21 00:05:39.654112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:49.281 [2024-11-21 00:05:39.654117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.281 [2024-11-21 00:05:39.660265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:49.281 [2024-11-21 00:05:39.660289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.281 [2024-11-21 00:05:39.660312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.117 ms 00:16:49.281 [2024-11-21 00:05:39.660318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.281 [2024-11-21 00:05:39.660409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.281 [2024-11-21 00:05:39.660422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.281 [2024-11-21 00:05:39.660429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:49.281 [2024-11-21 00:05:39.660436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.281 [2024-11-21 00:05:39.660457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.281 [2024-11-21 00:05:39.660468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:49.281 [2024-11-21 00:05:39.660478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:49.281 [2024-11-21 00:05:39.660483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.281 [2024-11-21 00:05:39.660499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:49.281 [2024-11-21 00:05:39.662029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.281 [2024-11-21 00:05:39.662052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.281 [2024-11-21 00:05:39.662062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.534 ms 00:16:49.281 [2024-11-21 00:05:39.662068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.281 [2024-11-21 00:05:39.662106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.281 [2024-11-21 00:05:39.662115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:49.281 [2024-11-21 00:05:39.662123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:49.281 [2024-11-21 00:05:39.662129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.281 [2024-11-21 00:05:39.662143] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:49.281 [2024-11-21 00:05:39.662160] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:49.281 [2024-11-21 00:05:39.662193] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:49.281 [2024-11-21 00:05:39.662206] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:49.281 [2024-11-21 00:05:39.662290] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:49.281 [2024-11-21 00:05:39.662308] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:49.281 [2024-11-21 00:05:39.662320] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:49.281 [2024-11-21 00:05:39.662328] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:49.282 [2024-11-21 00:05:39.662334] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:49.282 [2024-11-21 00:05:39.662343] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:49.282 [2024-11-21 00:05:39.662349] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:49.282 [2024-11-21 00:05:39.662355] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:49.282 [2024-11-21 00:05:39.662361] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:49.282 [2024-11-21 00:05:39.662367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.282 [2024-11-21 00:05:39.662374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:49.282 [2024-11-21 00:05:39.662384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:16:49.282 [2024-11-21 00:05:39.662394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.282 [2024-11-21 00:05:39.662460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.282 [2024-11-21 00:05:39.662467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:49.282 [2024-11-21 00:05:39.662473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:49.282 [2024-11-21 00:05:39.662479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.282 [2024-11-21 00:05:39.662552] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:49.282 [2024-11-21 00:05:39.662560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:49.282 [2024-11-21 00:05:39.662567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.282 [2024-11-21 00:05:39.662575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:49.282 [2024-11-21 00:05:39.662586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:49.282 [2024-11-21 00:05:39.662599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:49.282 [2024-11-21 00:05:39.662604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.282 [2024-11-21 00:05:39.662617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:49.282 [2024-11-21 00:05:39.662623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:49.282 [2024-11-21 00:05:39.662630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.282 [2024-11-21 00:05:39.662636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:49.282 [2024-11-21 00:05:39.662642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:49.282 [2024-11-21 00:05:39.662647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:49.282 [2024-11-21 00:05:39.662658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:49.282 [2024-11-21 00:05:39.662663] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:49.282 [2024-11-21 00:05:39.662675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.282 [2024-11-21 00:05:39.662687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:49.282 [2024-11-21 00:05:39.662693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.282 [2024-11-21 00:05:39.662708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:49.282 [2024-11-21 00:05:39.662715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.282 [2024-11-21 00:05:39.662727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:49.282 [2024-11-21 00:05:39.662733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:49.282 [2024-11-21 00:05:39.662745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:49.282 [2024-11-21 00:05:39.662751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.282 [2024-11-21 00:05:39.662763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:49.282 [2024-11-21 00:05:39.662769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:49.282 [2024-11-21 00:05:39.662775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.282 [2024-11-21 00:05:39.662780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:49.282 [2024-11-21 00:05:39.662786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:49.282 [2024-11-21 00:05:39.662793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:49.282 [2024-11-21 00:05:39.662806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:49.282 [2024-11-21 00:05:39.662813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662818] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:49.282 [2024-11-21 00:05:39.662826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:49.282 [2024-11-21 00:05:39.662837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.282 [2024-11-21 00:05:39.662844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.282 [2024-11-21 00:05:39.662851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:49.282 [2024-11-21 00:05:39.662857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:49.282 [2024-11-21 00:05:39.662864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:49.282 
[2024-11-21 00:05:39.662870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:49.282 [2024-11-21 00:05:39.662876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:49.282 [2024-11-21 00:05:39.662882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:49.282 [2024-11-21 00:05:39.662890] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:49.282 [2024-11-21 00:05:39.662898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.282 [2024-11-21 00:05:39.662909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:49.282 [2024-11-21 00:05:39.662917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:49.282 [2024-11-21 00:05:39.662924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:49.282 [2024-11-21 00:05:39.662931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:49.282 [2024-11-21 00:05:39.662937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:49.282 [2024-11-21 00:05:39.662943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:49.282 [2024-11-21 00:05:39.662950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:49.282 [2024-11-21 00:05:39.662961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:49.282 [2024-11-21 00:05:39.662967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:49.282 [2024-11-21 00:05:39.662973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:49.282 [2024-11-21 00:05:39.662979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:49.282 [2024-11-21 00:05:39.662985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:49.282 [2024-11-21 00:05:39.662991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:49.282 [2024-11-21 00:05:39.662998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:49.282 [2024-11-21 00:05:39.663004] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:49.282 [2024-11-21 00:05:39.663011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.282 [2024-11-21 00:05:39.663019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:49.282 [2024-11-21 00:05:39.663025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:49.282 [2024-11-21 00:05:39.663033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:49.283 [2024-11-21 00:05:39.663040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:49.283 [2024-11-21 00:05:39.663047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.283 [2024-11-21 00:05:39.663054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:49.283 [2024-11-21 00:05:39.663062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:16:49.283 [2024-11-21 00:05:39.663067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.283 [2024-11-21 00:05:39.685247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.283 [2024-11-21 00:05:39.685342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.283 [2024-11-21 00:05:39.685373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.138 ms 00:16:49.283 [2024-11-21 00:05:39.685395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.283 [2024-11-21 00:05:39.685656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.283 [2024-11-21 00:05:39.685689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:49.283 [2024-11-21 00:05:39.685708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:49.283 [2024-11-21 00:05:39.685729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.283 [2024-11-21 00:05:39.696450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.283 [2024-11-21 00:05:39.696476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.283 [2024-11-21 00:05:39.696485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.680 ms 00:16:49.283 [2024-11-21 00:05:39.696491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.283 [2024-11-21 00:05:39.696537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.283 [2024-11-21 00:05:39.696551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.283 [2024-11-21 00:05:39.696560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:49.283 [2024-11-21 00:05:39.696566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.283 [2024-11-21 00:05:39.696949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.283 [2024-11-21 00:05:39.696962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.283 [2024-11-21 00:05:39.696970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:16:49.283 [2024-11-21 00:05:39.696976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.283 [2024-11-21 00:05:39.697092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.283 [2024-11-21 00:05:39.697101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.283 [2024-11-21 00:05:39.697109] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:49.283 [2024-11-21 00:05:39.697118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.542 [2024-11-21 00:05:39.702905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.542 [2024-11-21 00:05:39.702931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.542 [2024-11-21 00:05:39.702939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.767 ms 00:16:49.542 [2024-11-21 00:05:39.702945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.542 [2024-11-21 00:05:39.705900] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:49.542 [2024-11-21 00:05:39.705929] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:49.542 [2024-11-21 00:05:39.705944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.542 [2024-11-21 00:05:39.705950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:49.542 [2024-11-21 00:05:39.705957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.927 ms 00:16:49.542 [2024-11-21 00:05:39.705962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.542 [2024-11-21 00:05:39.717667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.542 [2024-11-21 00:05:39.717700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:49.542 [2024-11-21 00:05:39.717710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.667 ms 00:16:49.542 [2024-11-21 00:05:39.717717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.542 [2024-11-21 00:05:39.719584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.542 [2024-11-21 00:05:39.719610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:49.542 [2024-11-21 00:05:39.719617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.804 ms 00:16:49.542 [2024-11-21 00:05:39.719623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.542 [2024-11-21 00:05:39.721121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.542 [2024-11-21 00:05:39.721147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:49.542 [2024-11-21 00:05:39.721159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms 00:16:49.542 [2024-11-21 00:05:39.721165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.542 [2024-11-21 00:05:39.721442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.542 [2024-11-21 00:05:39.721453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:49.542 [2024-11-21 00:05:39.721465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:16:49.542 [2024-11-21 00:05:39.721471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.542 [2024-11-21 00:05:39.740006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.542 [2024-11-21 00:05:39.740041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:49.542 [2024-11-21 00:05:39.740050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
18.516 ms 00:16:49.542 [2024-11-21 00:05:39.740056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.542 [2024-11-21 00:05:39.746443] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:49.543 [2024-11-21 00:05:39.761452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.543 [2024-11-21 00:05:39.761481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:49.543 [2024-11-21 00:05:39.761491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.326 ms 00:16:49.543 [2024-11-21 00:05:39.761498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.543 [2024-11-21 00:05:39.761571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.543 [2024-11-21 00:05:39.761582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:49.543 [2024-11-21 00:05:39.761589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:49.543 [2024-11-21 00:05:39.761596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.543 [2024-11-21 00:05:39.761647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.543 [2024-11-21 00:05:39.761657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:49.543 [2024-11-21 00:05:39.761664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:49.543 [2024-11-21 00:05:39.761670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.543 [2024-11-21 00:05:39.761693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.543 [2024-11-21 00:05:39.761700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:49.543 [2024-11-21 00:05:39.761707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:49.543 [2024-11-21 00:05:39.761712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.543 [2024-11-21 00:05:39.761738] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:49.543 [2024-11-21 00:05:39.761746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.543 [2024-11-21 00:05:39.761753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:49.543 [2024-11-21 00:05:39.761765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:49.543 [2024-11-21 00:05:39.761771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.543 [2024-11-21 00:05:39.765747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.543 [2024-11-21 00:05:39.765775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:49.543 [2024-11-21 00:05:39.765784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.962 ms 00:16:49.543 [2024-11-21 00:05:39.765791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.543 [2024-11-21 00:05:39.765866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.543 [2024-11-21 00:05:39.765875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:49.543 [2024-11-21 00:05:39.765882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:49.543 [2024-11-21 00:05:39.765890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.543 
[2024-11-21 00:05:39.766661] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:49.543 [2024-11-21 00:05:39.767484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.140 ms, result 0 00:16:49.543 [2024-11-21 00:05:39.768391] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:49.543 [2024-11-21 00:05:39.778159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:50.478  [2024-11-21T00:05:41.919Z] Copying: 23/256 [MB] (23 MBps) [2024-11-21T00:05:42.863Z] Copying: 51/256 [MB] (27 MBps) [2024-11-21T00:05:43.806Z] Copying: 71/256 [MB] (20 MBps) [2024-11-21T00:05:45.190Z] Copying: 90/256 [MB] (18 MBps) [2024-11-21T00:05:46.132Z] Copying: 106/256 [MB] (15 MBps) [2024-11-21T00:05:47.073Z] Copying: 127/256 [MB] (21 MBps) [2024-11-21T00:05:48.008Z] Copying: 139/256 [MB] (12 MBps) [2024-11-21T00:05:48.943Z] Copying: 150/256 [MB] (11 MBps) [2024-11-21T00:05:49.879Z] Copying: 162/256 [MB] (11 MBps) [2024-11-21T00:05:50.820Z] Copying: 174/256 [MB] (11 MBps) [2024-11-21T00:05:52.196Z] Copying: 184/256 [MB] (10 MBps) [2024-11-21T00:05:53.131Z] Copying: 195/256 [MB] (10 MBps) [2024-11-21T00:05:54.065Z] Copying: 206/256 [MB] (11 MBps) [2024-11-21T00:05:55.000Z] Copying: 220/256 [MB] (13 MBps) [2024-11-21T00:05:55.938Z] Copying: 231/256 [MB] (11 MBps) [2024-11-21T00:05:56.883Z] Copying: 242/256 [MB] (11 MBps) [2024-11-21T00:05:57.147Z] Copying: 253/256 [MB] (10 MBps) [2024-11-21T00:05:57.147Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-21 00:05:57.084031] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:06.726 [2024-11-21 00:05:57.086454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.086513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:06.726 [2024-11-21 00:05:57.086530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:06.726 [2024-11-21 00:05:57.086540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.086568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:06.726 [2024-11-21 00:05:57.087518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.087566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:06.726 [2024-11-21 00:05:57.087587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:17:06.726 [2024-11-21 00:05:57.087597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.090479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.090526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:06.726 [2024-11-21 00:05:57.090537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.853 ms 00:17:06.726 [2024-11-21 00:05:57.090546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.098838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.098896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:17:06.726 [2024-11-21 00:05:57.098908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.272 ms 00:17:06.726 [2024-11-21 00:05:57.098918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.105848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.106054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:06.726 [2024-11-21 00:05:57.106075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.872 ms 00:17:06.726 [2024-11-21 00:05:57.106084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.109374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.109421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:06.726 [2024-11-21 00:05:57.109431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.240 ms 00:17:06.726 [2024-11-21 00:05:57.109440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.115255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.115333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:06.726 [2024-11-21 00:05:57.115346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.754 ms 00:17:06.726 [2024-11-21 00:05:57.115360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.115501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.115514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:06.726 [2024-11-21 00:05:57.115525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:06.726 [2024-11-21 00:05:57.115535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.119292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.119349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:06.726 [2024-11-21 00:05:57.119359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.730 ms 00:17:06.726 [2024-11-21 00:05:57.119367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.122314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.122358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:06.726 [2024-11-21 00:05:57.122367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.901 ms 00:17:06.726 [2024-11-21 00:05:57.122375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.124637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.124839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:06.726 [2024-11-21 00:05:57.124857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.215 ms 00:17:06.726 [2024-11-21 00:05:57.124865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.127256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.726 [2024-11-21 00:05:57.127323] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:06.726 [2024-11-21 00:05:57.127333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.259 ms 00:17:06.726 [2024-11-21 00:05:57.127342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.726 [2024-11-21 00:05:57.127384] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:06.726 [2024-11-21 00:05:57.127401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 
[2024-11-21 00:05:57.127593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:06.726 [2024-11-21 00:05:57.127675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:06.727 [2024-11-21 00:05:57.127800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.127996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:06.727 [2024-11-21 00:05:57.128224] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:06.727 [2024-11-21 00:05:57.128233] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e 00:17:06.727 [2024-11-21 00:05:57.128241] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:06.727 [2024-11-21 00:05:57.128249] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:06.727 [2024-11-21 00:05:57.128258] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:06.727 [2024-11-21 00:05:57.128267] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:06.727 [2024-11-21 00:05:57.128274] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:06.727 [2024-11-21 00:05:57.128283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:06.727 [2024-11-21 00:05:57.128291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:06.727 [2024-11-21 00:05:57.128592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:06.727 [2024-11-21 00:05:57.128616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:06.727 [2024-11-21 00:05:57.128635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.727 [2024-11-21 00:05:57.128659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:06.727 [2024-11-21 00:05:57.128680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:17:06.727 [2024-11-21 00:05:57.128709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.727 [2024-11-21 00:05:57.131812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.727 [2024-11-21 00:05:57.131968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:06.727 [2024-11-21 00:05:57.132032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.066 ms 00:17:06.727 [2024-11-21 00:05:57.132059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.727 [2024-11-21 00:05:57.132236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.727 [2024-11-21 00:05:57.132394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:06.728 [2024-11-21 00:05:57.132431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:06.728 [2024-11-21 00:05:57.132453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.728 [2024-11-21 00:05:57.141989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.728 [2024-11-21 00:05:57.142151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.728 [2024-11-21 00:05:57.142206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.728 [2024-11-21 00:05:57.142229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:06.728 [2024-11-21 00:05:57.142406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.728 [2024-11-21 00:05:57.142437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.728 [2024-11-21 00:05:57.142461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.728 [2024-11-21 00:05:57.142488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.728 [2024-11-21 00:05:57.142556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.728 [2024-11-21 00:05:57.142582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.728 [2024-11-21 00:05:57.142604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.728 [2024-11-21 00:05:57.142703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.728 [2024-11-21 00:05:57.142742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.728 [2024-11-21 00:05:57.142772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.989 [2024-11-21 00:05:57.142837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.989 [2024-11-21 00:05:57.142859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.989 [2024-11-21 00:05:57.162467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.989 [2024-11-21 00:05:57.162524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.989 [2024-11-21 00:05:57.162538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.989 [2024-11-21 00:05:57.162556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.989 [2024-11-21 00:05:57.178019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.989 [2024-11-21 00:05:57.178269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:06.989 [2024-11-21 00:05:57.178317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.989 [2024-11-21 00:05:57.178328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.989 [2024-11-21 00:05:57.178397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.989 [2024-11-21 00:05:57.178408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.989 [2024-11-21 00:05:57.178418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.989 [2024-11-21 00:05:57.178429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.989 [2024-11-21 00:05:57.178464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.989 [2024-11-21 00:05:57.178477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.989 [2024-11-21 00:05:57.178486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.989 [2024-11-21 00:05:57.178495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.989 [2024-11-21 00:05:57.178589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.989 [2024-11-21 00:05:57.178602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.989 [2024-11-21 00:05:57.178611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.989 [2024-11-21 
00:05:57.178621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.989 [2024-11-21 00:05:57.178659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.989 [2024-11-21 00:05:57.178671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:06.989 [2024-11-21 00:05:57.178680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.989 [2024-11-21 00:05:57.178689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.989 [2024-11-21 00:05:57.178752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.989 [2024-11-21 00:05:57.178773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.989 [2024-11-21 00:05:57.178783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.989 [2024-11-21 00:05:57.178795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.989 [2024-11-21 00:05:57.178866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.989 [2024-11-21 00:05:57.178879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.989 [2024-11-21 00:05:57.178889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.989 [2024-11-21 00:05:57.178899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.989 [2024-11-21 00:05:57.179089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 92.608 ms, result 0 00:17:07.565 00:17:07.565 00:17:07.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.565 00:05:57 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85379 00:17:07.565 00:05:57 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85379 00:17:07.565 00:05:57 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85379 ']' 00:17:07.565 00:05:57 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:07.565 00:05:57 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.565 00:05:57 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:07.565 00:05:57 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.565 00:05:57 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:07.565 00:05:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:07.565 [2024-11-21 00:05:57.880383] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:07.565 [2024-11-21 00:05:57.880521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85379 ] 00:17:07.825 [2024-11-21 00:05:58.015829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.826 [2024-11-21 00:05:58.087316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.399 00:05:58 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:08.399 00:05:58 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:08.399 00:05:58 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:08.660 [2024-11-21 00:05:58.950013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.660 [2024-11-21 00:05:58.950100] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.923 [2024-11-21 00:05:59.129874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.923 [2024-11-21 00:05:59.129938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:08.923 [2024-11-21 00:05:59.129955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:08.923 [2024-11-21 00:05:59.129966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.923 [2024-11-21 00:05:59.132669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.923 [2024-11-21 00:05:59.132911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:08.923 [2024-11-21 00:05:59.132937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:17:08.923 [2024-11-21 00:05:59.132948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.923 [2024-11-21 00:05:59.133747] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:08.923 [2024-11-21 00:05:59.134675] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:08.923 [2024-11-21 00:05:59.134768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.923 [2024-11-21 00:05:59.134813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:08.923 [2024-11-21 00:05:59.134857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:17:08.923 [2024-11-21 00:05:59.134896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.923 [2024-11-21 00:05:59.138027] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:08.923 [2024-11-21 00:05:59.144536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.923 [2024-11-21 00:05:59.144633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:08.923 [2024-11-21 00:05:59.144669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.503 ms 00:17:08.923 [2024-11-21 00:05:59.144693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.923 [2024-11-21 00:05:59.144916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.923 [2024-11-21 00:05:59.144951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:08.923 [2024-11-21 00:05:59.144991] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:08.924 [2024-11-21 00:05:59.145015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.924 [2024-11-21 00:05:59.156892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.924 [2024-11-21 00:05:59.156935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.924 [2024-11-21 00:05:59.156950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.711 ms 00:17:08.924 [2024-11-21 00:05:59.156959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.924 [2024-11-21 00:05:59.157105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.924 [2024-11-21 00:05:59.157122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.924 [2024-11-21 00:05:59.157137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:08.924 [2024-11-21 00:05:59.157145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.924 [2024-11-21 00:05:59.157179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.924 [2024-11-21 00:05:59.157189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:08.924 [2024-11-21 00:05:59.157227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:08.924 [2024-11-21 00:05:59.157239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.924 [2024-11-21 00:05:59.157268] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:08.924 [2024-11-21 00:05:59.159950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.924 [2024-11-21 00:05:59.160174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.924 [2024-11-21 00:05:59.160194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:17:08.924 [2024-11-21 00:05:59.160205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.924 [2024-11-21 00:05:59.160258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.924 [2024-11-21 00:05:59.160274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:08.924 [2024-11-21 00:05:59.160284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:08.924 [2024-11-21 00:05:59.160316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.924 [2024-11-21 00:05:59.160340] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:08.924 [2024-11-21 00:05:59.160368] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:08.924 [2024-11-21 00:05:59.160413] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:08.924 [2024-11-21 00:05:59.160436] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:08.924 [2024-11-21 00:05:59.160548] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:08.924 [2024-11-21 00:05:59.160565] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:08.924 [2024-11-21 00:05:59.160580] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:08.924 [2024-11-21 00:05:59.160602] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:08.924 [2024-11-21 00:05:59.160612] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:08.924 [2024-11-21 00:05:59.160627] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:08.924 [2024-11-21 00:05:59.160635] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:08.924 [2024-11-21 00:05:59.160646] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:08.924 [2024-11-21 00:05:59.160654] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:08.924 [2024-11-21 00:05:59.160666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.924 [2024-11-21 00:05:59.160677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:08.924 [2024-11-21 00:05:59.160690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:17:08.924 [2024-11-21 00:05:59.160701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.924 [2024-11-21 00:05:59.160795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.924 [2024-11-21 00:05:59.160806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:08.924 [2024-11-21 00:05:59.160818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:08.924 [2024-11-21 00:05:59.160827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.924 [2024-11-21 00:05:59.160936] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:08.924 [2024-11-21 00:05:59.160952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:08.924 [2024-11-21 00:05:59.160968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.924 [2024-11-21 00:05:59.160978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.924 [2024-11-21 00:05:59.160992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:08.924 [2024-11-21 00:05:59.161000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:08.924 [2024-11-21 00:05:59.161010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:08.924 [2024-11-21 00:05:59.161020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:08.924 [2024-11-21 00:05:59.161033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:08.924 [2024-11-21 00:05:59.161042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.924 [2024-11-21 00:05:59.161053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:08.924 [2024-11-21 00:05:59.161060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:08.924 [2024-11-21 00:05:59.161069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.924 [2024-11-21 00:05:59.161077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:08.924 [2024-11-21 00:05:59.161088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:08.924 [2024-11-21 00:05:59.161096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.924 
[2024-11-21 00:05:59.161106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:08.924 [2024-11-21 00:05:59.161113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:08.924 [2024-11-21 00:05:59.161122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.924 [2024-11-21 00:05:59.161129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:08.924 [2024-11-21 00:05:59.161140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:08.924 [2024-11-21 00:05:59.161147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:08.924 [2024-11-21 00:05:59.161157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:08.924 [2024-11-21 00:05:59.161164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:08.924 [2024-11-21 00:05:59.161173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:08.924 [2024-11-21 00:05:59.161180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:08.924 [2024-11-21 00:05:59.161189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:08.924 [2024-11-21 00:05:59.161196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:08.924 [2024-11-21 00:05:59.161220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:08.924 [2024-11-21 00:05:59.161226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:08.924 [2024-11-21 00:05:59.161235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:08.924 [2024-11-21 00:05:59.161242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:08.924 [2024-11-21 00:05:59.161253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:08.924 [2024-11-21 00:05:59.161260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.924 [2024-11-21 00:05:59.161269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:08.924 [2024-11-21 00:05:59.161276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:08.924 [2024-11-21 00:05:59.161287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.924 [2024-11-21 00:05:59.161335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:08.924 [2024-11-21 00:05:59.161347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:08.924 [2024-11-21 00:05:59.161354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.925 [2024-11-21 00:05:59.161363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:08.925 [2024-11-21 00:05:59.161372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:08.925 [2024-11-21 00:05:59.161382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.925 [2024-11-21 00:05:59.161390] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:08.925 [2024-11-21 00:05:59.161402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:08.925 [2024-11-21 00:05:59.161412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.925 [2024-11-21 00:05:59.161423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.925 [2024-11-21 00:05:59.161432] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:08.925 [2024-11-21 00:05:59.161441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:08.925 [2024-11-21 00:05:59.161449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:08.925 [2024-11-21 00:05:59.161458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:08.925 [2024-11-21 00:05:59.161465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:08.925 [2024-11-21 00:05:59.161479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:08.925 [2024-11-21 00:05:59.161489] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:08.925 [2024-11-21 00:05:59.161505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:08.925 [2024-11-21 00:05:59.161514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:08.925 [2024-11-21 00:05:59.161534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:08.925 [2024-11-21 00:05:59.161541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:08.925 [2024-11-21 00:05:59.161552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:08.925 [2024-11-21 00:05:59.161561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:08.925 [2024-11-21 00:05:59.161570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:08.925 [2024-11-21 00:05:59.161577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:08.925 [2024-11-21 00:05:59.161587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:08.925 [2024-11-21 00:05:59.161594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:08.925 [2024-11-21 00:05:59.161603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:08.925 [2024-11-21 00:05:59.161609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:08.925 [2024-11-21 00:05:59.161620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:08.925 [2024-11-21 00:05:59.161628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:08.925 [2024-11-21 00:05:59.161640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:08.925 [2024-11-21 00:05:59.161654] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:08.925 [2024-11-21 
00:05:59.161664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:08.925 [2024-11-21 00:05:59.161672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:08.925 [2024-11-21 00:05:59.161683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:08.925 [2024-11-21 00:05:59.161693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:08.925 [2024-11-21 00:05:59.161703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:08.925 [2024-11-21 00:05:59.161711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.161724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:08.925 [2024-11-21 00:05:59.161734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:17:08.925 [2024-11-21 00:05:59.161748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.925 [2024-11-21 00:05:59.182218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.182272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.925 [2024-11-21 00:05:59.182286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.382 ms 00:17:08.925 [2024-11-21 00:05:59.182328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.925 [2024-11-21 00:05:59.182487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.182511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:08.925 [2024-11-21 00:05:59.182524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:08.925 [2024-11-21 00:05:59.182534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.925 [2024-11-21 00:05:59.199235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.199290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.925 [2024-11-21 00:05:59.199321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.673 ms 00:17:08.925 [2024-11-21 00:05:59.199333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.925 [2024-11-21 00:05:59.199409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.199425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.925 [2024-11-21 00:05:59.199435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:08.925 [2024-11-21 00:05:59.199446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.925 [2024-11-21 00:05:59.200129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.200181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.925 [2024-11-21 00:05:59.200193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:17:08.925 [2024-11-21 00:05:59.200205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:08.925 [2024-11-21 00:05:59.200394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.200413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.925 [2024-11-21 00:05:59.200425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:17:08.925 [2024-11-21 00:05:59.200436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.925 [2024-11-21 00:05:59.222224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.222490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.925 [2024-11-21 00:05:59.222520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.757 ms 00:17:08.925 [2024-11-21 00:05:59.222533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.925 [2024-11-21 00:05:59.227694] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:08.925 [2024-11-21 00:05:59.227756] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:08.925 [2024-11-21 00:05:59.227773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.227787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:08.925 [2024-11-21 00:05:59.227799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.026 ms 00:17:08.925 [2024-11-21 00:05:59.227810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.925 [2024-11-21 00:05:59.244631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.244687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:08.925 [2024-11-21 00:05:59.244701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.728 ms 00:17:08.925 [2024-11-21 00:05:59.244716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.925 [2024-11-21 00:05:59.247695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.247752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:08.925 [2024-11-21 00:05:59.247763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:17:08.925 [2024-11-21 00:05:59.247773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.925 [2024-11-21 00:05:59.250907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.925 [2024-11-21 00:05:59.250975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:08.926 [2024-11-21 00:05:59.250988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.075 ms 00:17:08.926 [2024-11-21 00:05:59.250998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.926 [2024-11-21 00:05:59.251438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.926 [2024-11-21 00:05:59.251469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:08.926 [2024-11-21 00:05:59.251481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:17:08.926 [2024-11-21 00:05:59.251493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.926 [2024-11-21 
00:05:59.283990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.926 [2024-11-21 00:05:59.284055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:08.926 [2024-11-21 00:05:59.284070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.472 ms 00:17:08.926 [2024-11-21 00:05:59.284085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.926 [2024-11-21 00:05:59.293272] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:08.926 [2024-11-21 00:05:59.318273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.926 [2024-11-21 00:05:59.318340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:08.926 [2024-11-21 00:05:59.318359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.084 ms 00:17:08.926 [2024-11-21 00:05:59.318369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.926 [2024-11-21 00:05:59.318478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.926 [2024-11-21 00:05:59.318499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:08.926 [2024-11-21 00:05:59.318513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:08.926 [2024-11-21 00:05:59.318525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.926 [2024-11-21 00:05:59.318600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.926 [2024-11-21 00:05:59.318611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:08.926 [2024-11-21 00:05:59.318632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:08.926 [2024-11-21 00:05:59.318640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.926 [2024-11-21 00:05:59.318683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.926 [2024-11-21 00:05:59.318694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:08.926 [2024-11-21 00:05:59.318708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:08.926 [2024-11-21 00:05:59.318716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.926 [2024-11-21 00:05:59.318765] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:08.926 [2024-11-21 00:05:59.318777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.926 [2024-11-21 00:05:59.318788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:08.926 [2024-11-21 00:05:59.318796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:08.926 [2024-11-21 00:05:59.318806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.926 [2024-11-21 00:05:59.325836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.926 [2024-11-21 00:05:59.325894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:08.926 [2024-11-21 00:05:59.325907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.005 ms 00:17:08.926 [2024-11-21 00:05:59.325919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.926 [2024-11-21 00:05:59.326022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.926 [2024-11-21 00:05:59.326041] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:08.926 [2024-11-21 00:05:59.326053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:08.926 [2024-11-21 00:05:59.326064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.926 [2024-11-21 00:05:59.327337] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:08.926 [2024-11-21 00:05:59.328771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 197.061 ms, result 0 00:17:08.926 [2024-11-21 00:05:59.331516] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.187 Some configs were skipped because the RPC state that can call them passed over. 00:17:09.187 00:05:59 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:09.187 [2024-11-21 00:05:59.568208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.187 [2024-11-21 00:05:59.568438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:09.187 [2024-11-21 00:05:59.568523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.450 ms 00:17:09.187 [2024-11-21 00:05:59.568549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.187 [2024-11-21 00:05:59.568611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.857 ms, result 0 00:17:09.187 true 00:17:09.187 00:05:59 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:09.447 [2024-11-21 00:05:59.799859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.447 [2024-11-21 00:05:59.800054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:09.447 [2024-11-21 00:05:59.800123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.943 ms 00:17:09.447 [2024-11-21 00:05:59.800152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.447 [2024-11-21 00:05:59.800210] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.293 ms, result 0 00:17:09.447 true 00:17:09.447 00:05:59 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85379 00:17:09.447 00:05:59 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85379 ']' 00:17:09.447 00:05:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85379 00:17:09.447 00:05:59 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:09.447 00:05:59 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.447 00:05:59 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85379 00:17:09.447 00:05:59 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.447 00:05:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.447 00:05:59 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85379' 00:17:09.447 killing process with pid 85379 00:17:09.447 00:05:59 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85379 00:17:09.447 00:05:59 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85379 00:17:09.710 [2024-11-21 00:06:00.042782] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.043092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:09.710 [2024-11-21 00:06:00.043171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:09.710 [2024-11-21 00:06:00.043197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.043255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:09.710 [2024-11-21 00:06:00.044167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.044341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:09.710 [2024-11-21 00:06:00.044405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:17:09.710 [2024-11-21 00:06:00.044433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.044766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.044806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:09.710 [2024-11-21 00:06:00.044829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:17:09.710 [2024-11-21 00:06:00.044887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.049463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.049606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:09.710 [2024-11-21 00:06:00.049669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.533 ms 00:17:09.710 [2024-11-21 00:06:00.049698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.057006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.057233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:09.710 [2024-11-21 00:06:00.057256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.009 ms 00:17:09.710 [2024-11-21 00:06:00.057271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.060247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.060449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:09.710 [2024-11-21 00:06:00.060469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.862 ms 00:17:09.710 [2024-11-21 00:06:00.060480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.066214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.066286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:09.710 [2024-11-21 00:06:00.066313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.417 ms 00:17:09.710 [2024-11-21 00:06:00.066329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.066506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.066530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:09.710 [2024-11-21 00:06:00.066541] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:09.710 [2024-11-21 00:06:00.066552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.070382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.070438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:09.710 [2024-11-21 00:06:00.070449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.808 ms 00:17:09.710 [2024-11-21 00:06:00.070465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.073494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.073549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:09.710 [2024-11-21 00:06:00.073559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.959 ms 00:17:09.710 [2024-11-21 00:06:00.073569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.075842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.076047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:09.710 [2024-11-21 00:06:00.076066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.224 ms 00:17:09.710 [2024-11-21 00:06:00.076076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.078534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.710 [2024-11-21 00:06:00.078583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:09.710 [2024-11-21 00:06:00.078594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.267 ms 00:17:09.710 [2024-11-21 00:06:00.078606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.710 [2024-11-21 00:06:00.078656] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:09.710 [2024-11-21 00:06:00.078676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:09.710 [2024-11-21 00:06:00.078694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:09.710 [2024-11-21 00:06:00.078708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:09.710 [2024-11-21 00:06:00.078718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:09.710 [2024-11-21 00:06:00.078729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:09.710 [2024-11-21 00:06:00.078737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:09.710 [2024-11-21 00:06:00.078748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:09.710 [2024-11-21 00:06:00.078756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:09.710 [2024-11-21 00:06:00.078769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:09.710 [2024-11-21 00:06:00.078778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078788] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.078995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 
[2024-11-21 00:06:00.079039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:09.711 [2024-11-21 00:06:00.079275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:09.711 [2024-11-21 00:06:00.079649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:09.712 [2024-11-21 00:06:00.079657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:09.712 [2024-11-21 00:06:00.079678] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:09.712 [2024-11-21 00:06:00.079689] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e 00:17:09.712 [2024-11-21 00:06:00.079699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:09.712 [2024-11-21 00:06:00.079707] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:09.712 [2024-11-21 00:06:00.079718] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:09.712 [2024-11-21 00:06:00.079730] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:09.712 [2024-11-21 00:06:00.079740] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:09.712 [2024-11-21 00:06:00.079748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:09.712 [2024-11-21 00:06:00.079758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:09.712 [2024-11-21 00:06:00.079765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:09.712 [2024-11-21 00:06:00.079774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:09.712 [2024-11-21 00:06:00.079782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:09.712 [2024-11-21 00:06:00.079799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:09.712 [2024-11-21 00:06:00.079808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.128 ms 00:17:09.712 [2024-11-21 00:06:00.079825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.712 [2024-11-21 00:06:00.082846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.712 [2024-11-21 00:06:00.082944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:09.712 [2024-11-21 00:06:00.082959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.966 ms 00:17:09.712 [2024-11-21 00:06:00.082974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.712 [2024-11-21 00:06:00.083131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.712 [2024-11-21 00:06:00.083144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:09.712 [2024-11-21 00:06:00.083155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:17:09.712 [2024-11-21 00:06:00.083167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.712 [2024-11-21 00:06:00.094168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.712 [2024-11-21 00:06:00.094450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.712 [2024-11-21 00:06:00.094470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.712 [2024-11-21 00:06:00.094482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.712 [2024-11-21 00:06:00.094584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.712 [2024-11-21 00:06:00.094603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.712 [2024-11-21 00:06:00.094613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.712 [2024-11-21 00:06:00.094628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.712 [2024-11-21 00:06:00.094678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.712 [2024-11-21 00:06:00.094691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.712 [2024-11-21 00:06:00.094705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.712 [2024-11-21 00:06:00.094716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.712 [2024-11-21 00:06:00.094737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.712 [2024-11-21 00:06:00.094750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.712 [2024-11-21 00:06:00.094759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.712 [2024-11-21 00:06:00.094770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.712 [2024-11-21 00:06:00.114985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.712 [2024-11-21 00:06:00.115059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.712 [2024-11-21 00:06:00.115071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.712 [2024-11-21 00:06:00.115082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.972 [2024-11-21 
00:06:00.131136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.972 [2024-11-21 00:06:00.131206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.972 [2024-11-21 00:06:00.131220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.972 [2024-11-21 00:06:00.131236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.972 [2024-11-21 00:06:00.131367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.972 [2024-11-21 00:06:00.131395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:09.972 [2024-11-21 00:06:00.131405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.972 [2024-11-21 00:06:00.131421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.972 [2024-11-21 00:06:00.131460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.972 [2024-11-21 00:06:00.131475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:09.972 [2024-11-21 00:06:00.131484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.972 [2024-11-21 00:06:00.131496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.972 [2024-11-21 00:06:00.131586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.972 [2024-11-21 00:06:00.131601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:09.972 [2024-11-21 00:06:00.131612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.972 [2024-11-21 00:06:00.131625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.972 [2024-11-21 00:06:00.131670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.972 [2024-11-21 00:06:00.131684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:09.972 [2024-11-21 00:06:00.131692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.972 [2024-11-21 00:06:00.131705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.972 [2024-11-21 00:06:00.131760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.972 [2024-11-21 00:06:00.131775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:09.972 [2024-11-21 00:06:00.131785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.972 [2024-11-21 00:06:00.131798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.972 [2024-11-21 00:06:00.131865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.972 [2024-11-21 00:06:00.131881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:09.972 [2024-11-21 00:06:00.131895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.972 [2024-11-21 00:06:00.131908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.972 [2024-11-21 00:06:00.132089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 89.272 ms, result 0 00:17:10.231 00:06:00 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:10.231 00:06:00 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:10.231 [2024-11-21 00:06:00.519436] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:10.231 [2024-11-21 00:06:00.519564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85420 ] 00:17:10.489 [2024-11-21 00:06:00.653342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.489 [2024-11-21 00:06:00.696390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.489 [2024-11-21 00:06:00.795935] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.489 [2024-11-21 00:06:00.795992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.749 [2024-11-21 00:06:00.946431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.749 [2024-11-21 00:06:00.946471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:10.749 [2024-11-21 00:06:00.946482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:10.749 [2024-11-21 00:06:00.946489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.749 [2024-11-21 00:06:00.948350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.749 [2024-11-21 00:06:00.948383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:10.749 [2024-11-21 00:06:00.948392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.848 ms 00:17:10.749 [2024-11-21 00:06:00.948403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.749 [2024-11-21 00:06:00.948460] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:10.749 [2024-11-21 00:06:00.948651] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:10.749 [2024-11-21 00:06:00.948665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.749 [2024-11-21 00:06:00.948671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:10.749 [2024-11-21 00:06:00.948679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:17:10.749 [2024-11-21 00:06:00.948685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.749 [2024-11-21 00:06:00.950001] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:10.749 [2024-11-21 00:06:00.952679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.749 [2024-11-21 00:06:00.952706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:10.749 [2024-11-21 00:06:00.952718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.678 ms 00:17:10.749 [2024-11-21 00:06:00.952724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.749 [2024-11-21 00:06:00.952776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.749 [2024-11-21 00:06:00.952784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:10.749 [2024-11-21 00:06:00.952791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.015 ms 00:17:10.749 [2024-11-21 00:06:00.952796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.749 [2024-11-21 00:06:00.959101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.749 [2024-11-21 00:06:00.959191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:10.749 [2024-11-21 00:06:00.959239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.275 ms 00:17:10.749 [2024-11-21 00:06:00.959257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.749 [2024-11-21 00:06:00.959374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.749 [2024-11-21 00:06:00.959492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:10.749 [2024-11-21 00:06:00.959513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:10.749 [2024-11-21 00:06:00.959528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.749 [2024-11-21 00:06:00.959559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.749 [2024-11-21 00:06:00.959576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:10.749 [2024-11-21 00:06:00.959595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:10.749 [2024-11-21 00:06:00.959609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.749 [2024-11-21 00:06:00.959636] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:10.749 [2024-11-21 00:06:00.961192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.749 [2024-11-21 00:06:00.961321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:10.749 [2024-11-21 00:06:00.961333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:17:10.749 [2024-11-21 00:06:00.961339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.749 [2024-11-21 00:06:00.961372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.749 [2024-11-21 00:06:00.961382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:10.749 [2024-11-21 00:06:00.961389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:10.749 [2024-11-21 00:06:00.961396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.749 [2024-11-21 00:06:00.961411] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:10.749 [2024-11-21 00:06:00.961426] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:10.749 [2024-11-21 00:06:00.961458] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:10.749 [2024-11-21 00:06:00.961475] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:10.749 [2024-11-21 00:06:00.961561] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:10.749 [2024-11-21 00:06:00.961570] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:10.749 [2024-11-21 00:06:00.961579] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:17:10.749 [2024-11-21 00:06:00.961587] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:17:10.749 [2024-11-21 00:06:00.961593] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:17:10.749 [2024-11-21 00:06:00.961600] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:17:10.749 [2024-11-21 00:06:00.961606] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:17:10.750 [2024-11-21 00:06:00.961614] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:17:10.750 [2024-11-21 00:06:00.961620] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:17:10.750 [2024-11-21 00:06:00.961626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.750 [2024-11-21 00:06:00.961631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:17:10.750 [2024-11-21 00:06:00.961641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms
00:17:10.750 [2024-11-21 00:06:00.961648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.750 [2024-11-21 00:06:00.961715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.750 [2024-11-21 00:06:00.961721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:17:10.750 [2024-11-21 00:06:00.961730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms
00:17:10.750 [2024-11-21 00:06:00.961738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.750 [2024-11-21 00:06:00.961817] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:17:10.750 [2024-11-21 00:06:00.961825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:17:10.750 [2024-11-21 00:06:00.961831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:10.750 [2024-11-21 00:06:00.961840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:10.750 [2024-11-21 00:06:00.961849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:17:10.750 [2024-11-21 00:06:00.961854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:17:10.750 [2024-11-21 00:06:00.961860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:17:10.750 [2024-11-21 00:06:00.961865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:17:10.750 [2024-11-21 00:06:00.961872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:17:10.750 [2024-11-21 00:06:00.961879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:10.750 [2024-11-21 00:06:00.961884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:17:10.750 [2024-11-21 00:06:00.961890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:17:10.750 [2024-11-21 00:06:00.961895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:10.750 [2024-11-21 00:06:00.961901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:17:10.750 [2024-11-21 00:06:00.961906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:17:10.750 [2024-11-21 00:06:00.961913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:10.750 [2024-11-21 00:06:00.961920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:17:10.750 [2024-11-21 00:06:00.961925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:17:10.750 [2024-11-21 00:06:00.961930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:10.750 [2024-11-21 00:06:00.961935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:17:10.750 [2024-11-21 00:06:00.961940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:17:10.750 [2024-11-21 00:06:00.961945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:10.750 [2024-11-21 00:06:00.961950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:17:10.750 [2024-11-21 00:06:00.961955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:17:10.750 [2024-11-21 00:06:00.961961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:10.750 [2024-11-21 00:06:00.961970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:17:10.750 [2024-11-21 00:06:00.961975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:17:10.750 [2024-11-21 00:06:00.961980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:10.750 [2024-11-21 00:06:00.961985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:17:10.750 [2024-11-21 00:06:00.961990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:17:10.750 [2024-11-21 00:06:00.961996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:10.750 [2024-11-21 00:06:00.962002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:17:10.750 [2024-11-21 00:06:00.962007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:17:10.750 [2024-11-21 00:06:00.962012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:10.750 [2024-11-21 00:06:00.962016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:17:10.750 [2024-11-21 00:06:00.962022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:17:10.750 [2024-11-21 00:06:00.962026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:10.750 [2024-11-21 00:06:00.962032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:17:10.750 [2024-11-21 00:06:00.962037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:17:10.750 [2024-11-21 00:06:00.962042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:10.750 [2024-11-21 00:06:00.962046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:17:10.750 [2024-11-21 00:06:00.962053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:17:10.750 [2024-11-21 00:06:00.962058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:10.750 [2024-11-21 00:06:00.962063] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:17:10.750 [2024-11-21 00:06:00.962069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:17:10.750 [2024-11-21 00:06:00.962077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:10.750 [2024-11-21 00:06:00.962083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:10.750 [2024-11-21 00:06:00.962091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:17:10.750 [2024-11-21 00:06:00.962096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:17:10.750 [2024-11-21 00:06:00.962101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:17:10.750 [2024-11-21 00:06:00.962107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:17:10.750 [2024-11-21 00:06:00.962112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:17:10.750 [2024-11-21 00:06:00.962117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:17:10.750 [2024-11-21 00:06:00.962123] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:17:10.750 [2024-11-21 00:06:00.962130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:10.750 [2024-11-21 00:06:00.962137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:17:10.750 [2024-11-21 00:06:00.962143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:17:10.750 [2024-11-21 00:06:00.962150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:17:10.750 [2024-11-21 00:06:00.962155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:17:10.750 [2024-11-21 00:06:00.962161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:17:10.750 [2024-11-21 00:06:00.962167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:17:10.750 [2024-11-21 00:06:00.962172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:17:10.750 [2024-11-21 00:06:00.962182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:17:10.750 [2024-11-21 00:06:00.962188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:17:10.750 [2024-11-21 00:06:00.962193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:17:10.750 [2024-11-21 00:06:00.962199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:17:10.750 [2024-11-21 00:06:00.962204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:17:10.750 [2024-11-21 00:06:00.962210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:17:10.750 [2024-11-21 00:06:00.962215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:17:10.750 [2024-11-21 00:06:00.962221] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:17:10.750 [2024-11-21 00:06:00.962230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:10.750 [2024-11-21 00:06:00.962238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:17:10.750 [2024-11-21 00:06:00.962244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:17:10.750 [2024-11-21 00:06:00.962251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:17:10.750 [2024-11-21 00:06:00.962257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:17:10.750 [2024-11-21 00:06:00.962263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.750 [2024-11-21 00:06:00.962269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:17:10.750 [2024-11-21 00:06:00.962276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms
00:17:10.750 [2024-11-21 00:06:00.962282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.750 [2024-11-21 00:06:00.981070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.750 [2024-11-21 00:06:00.981196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:10.750 [2024-11-21 00:06:00.981221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.736 ms
00:17:10.750 [2024-11-21 00:06:00.981227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.750 [2024-11-21 00:06:00.981350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.750 [2024-11-21 00:06:00.981361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:17:10.750 [2024-11-21 00:06:00.981373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms
00:17:10.750 [2024-11-21 00:06:00.981382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.750 [2024-11-21 00:06:00.991413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:00.991537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:10.751 [2024-11-21 00:06:00.991553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.013 ms
00:17:10.751 [2024-11-21 00:06:00.991562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:00.991625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:00.991640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:10.751 [2024-11-21 00:06:00.991652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:10.751 [2024-11-21 00:06:00.991660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:00.992078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:00.992101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:10.751 [2024-11-21 00:06:00.992119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms
00:17:10.751 [2024-11-21 00:06:00.992129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:00.992281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:00.992316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:10.751 [2024-11-21 00:06:00.992327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms
00:17:10.751 [2024-11-21 00:06:00.992345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:00.998632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:00.998729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:10.751 [2024-11-21 00:06:00.998741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.263 ms
00:17:10.751 [2024-11-21 00:06:00.998747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.001573] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:17:10.751 [2024-11-21 00:06:01.001605] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:17:10.751 [2024-11-21 00:06:01.001617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.001624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:17:10.751 [2024-11-21 00:06:01.001631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.799 ms
00:17:10.751 [2024-11-21 00:06:01.001636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.013250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.013364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:17:10.751 [2024-11-21 00:06:01.013377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.565 ms
00:17:10.751 [2024-11-21 00:06:01.013384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.015085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.015112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:17:10.751 [2024-11-21 00:06:01.015120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms
00:17:10.751 [2024-11-21 00:06:01.015126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.016640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.016731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:17:10.751 [2024-11-21 00:06:01.016743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.483 ms
00:17:10.751 [2024-11-21 00:06:01.016754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.017003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.017016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:17:10.751 [2024-11-21 00:06:01.017023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms
00:17:10.751 [2024-11-21 00:06:01.017031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.035191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.035227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:17:10.751 [2024-11-21 00:06:01.035237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.142 ms
00:17:10.751 [2024-11-21 00:06:01.035243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.041334] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:17:10.751 [2024-11-21 00:06:01.056302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.056331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:17:10.751 [2024-11-21 00:06:01.056340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.001 ms
00:17:10.751 [2024-11-21 00:06:01.056346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.056438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.056447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:17:10.751 [2024-11-21 00:06:01.056454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:17:10.751 [2024-11-21 00:06:01.056466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.056511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.056518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:17:10.751 [2024-11-21 00:06:01.056528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:17:10.751 [2024-11-21 00:06:01.056534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.056552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.056559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:10.751 [2024-11-21 00:06:01.056565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:10.751 [2024-11-21 00:06:01.056572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.056601] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:17:10.751 [2024-11-21 00:06:01.056609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.056618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:17:10.751 [2024-11-21 00:06:01.056625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:17:10.751 [2024-11-21 00:06:01.056631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.060763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.060792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:10.751 [2024-11-21 00:06:01.060800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.117 ms
00:17:10.751 [2024-11-21 00:06:01.060807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.060882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:10.751 [2024-11-21 00:06:01.060892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:10.751 [2024-11-21 00:06:01.060899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:17:10.751 [2024-11-21 00:06:01.060910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:10.751 [2024-11-21 00:06:01.061695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:10.751 [2024-11-21 00:06:01.062524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.016 ms, result 0
00:17:10.751 [2024-11-21 00:06:01.063387] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:10.751 [2024-11-21 00:06:01.073188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:11.684  [2024-11-21T00:06:03.486Z] Copying: 15/256 [MB] (15 MBps) [2024-11-21T00:06:04.420Z] Copying: 27/256 [MB] (11 MBps) [2024-11-21T00:06:05.359Z] Copying: 38/256 [MB] (11 MBps) [2024-11-21T00:06:06.299Z] Copying: 55/256 [MB] (17 MBps) [2024-11-21T00:06:07.242Z] Copying: 70/256 [MB] (14 MBps) [2024-11-21T00:06:08.293Z] Copying: 81/256 [MB] (11 MBps) [2024-11-21T00:06:09.229Z] Copying: 92/256 [MB] (10 MBps) [2024-11-21T00:06:10.161Z] Copying: 103/256 [MB] (11 MBps) [2024-11-21T00:06:11.098Z] Copying: 114/256 [MB] (11 MBps) [2024-11-21T00:06:12.480Z] Copying: 127/256 [MB] (12 MBps) [2024-11-21T00:06:13.423Z] Copying: 144/256 [MB] (17 MBps) [2024-11-21T00:06:14.367Z] Copying: 158/256 [MB] (13 MBps) [2024-11-21T00:06:15.304Z] Copying: 175/256 [MB] (17 MBps) [2024-11-21T00:06:16.250Z] Copying: 187/256 [MB] (11 MBps) [2024-11-21T00:06:17.193Z] Copying: 205/256 [MB] (17 MBps) [2024-11-21T00:06:18.138Z] Copying: 215/256 [MB] (10 MBps) [2024-11-21T00:06:19.517Z] Copying: 227/256 [MB] (11 MBps) [2024-11-21T00:06:20.087Z] Copying: 238/256 [MB] (11 MBps) [2024-11-21T00:06:20.660Z] Copying: 250/256 [MB] (11 MBps) [2024-11-21T00:06:20.660Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-21 00:06:20.633718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:30.239 [2024-11-21 00:06:20.636091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.239 [2024-11-21 00:06:20.636134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:30.239 [2024-11-21 00:06:20.636152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:17:30.239 [2024-11-21 00:06:20.636166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.239 [2024-11-21 00:06:20.636189] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:30.239 [2024-11-21 00:06:20.637141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.239 [2024-11-21 00:06:20.637177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:30.239 [2024-11-21 00:06:20.637189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms
00:17:30.239 [2024-11-21 00:06:20.637217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.239 [2024-11-21 00:06:20.637515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.239 [2024-11-21 00:06:20.637527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:30.239 [2024-11-21 00:06:20.637537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms
00:17:30.239 [2024-11-21 00:06:20.637545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.239 [2024-11-21 00:06:20.641463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.239 [2024-11-21 00:06:20.641593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:30.239 [2024-11-21 00:06:20.641655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.892 ms
00:17:30.239 [2024-11-21 00:06:20.641679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.239 [2024-11-21 00:06:20.648693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.239 [2024-11-21 00:06:20.648842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:17:30.239 [2024-11-21 00:06:20.648918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.958 ms
00:17:30.239 [2024-11-21 00:06:20.648942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.239 [2024-11-21 00:06:20.652197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.239 [2024-11-21 00:06:20.652376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:17:30.239 [2024-11-21 00:06:20.652510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.191 ms
00:17:30.239 [2024-11-21 00:06:20.652539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.500 [2024-11-21 00:06:20.658188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.500 [2024-11-21 00:06:20.658367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:17:30.500 [2024-11-21 00:06:20.658446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.597 ms
00:17:30.500 [2024-11-21 00:06:20.658471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.500 [2024-11-21 00:06:20.658649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.500 [2024-11-21 00:06:20.658749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:17:30.500 [2024-11-21 00:06:20.658774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms
00:17:30.500 [2024-11-21 00:06:20.658795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.500 [2024-11-21 00:06:20.661680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.500 [2024-11-21 00:06:20.661824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:17:30.500 [2024-11-21 00:06:20.661841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.851 ms
00:17:30.500 [2024-11-21 00:06:20.661849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.500 [2024-11-21 00:06:20.664463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.500 [2024-11-21 00:06:20.664500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:17:30.500 [2024-11-21 00:06:20.664510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.560 ms
00:17:30.500 [2024-11-21 00:06:20.664517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.500 [2024-11-21 00:06:20.666897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.500 [2024-11-21 00:06:20.666925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:17:30.500 [2024-11-21 00:06:20.666934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.333 ms
[2024-11-21 00:06:20.666934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.333 ms 00:17:30.500 [2024-11-21 00:06:20.666942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.500 [2024-11-21 00:06:20.669249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.500 [2024-11-21 00:06:20.669395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:30.500 [2024-11-21 00:06:20.669450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:17:30.500 [2024-11-21 00:06:20.669472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.500 [2024-11-21 00:06:20.669543] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:30.500 [2024-11-21 00:06:20.669585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:30.500 [2024-11-21 00:06:20.669617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:30.500 [2024-11-21 00:06:20.669646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:30.500 [2024-11-21 00:06:20.669726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:30.500 [2024-11-21 00:06:20.669756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:30.500 [2024-11-21 00:06:20.669784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:30.500 [2024-11-21 00:06:20.669813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:30.500 [2024-11-21 00:06:20.669842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:30.500 [2024-11-21 00:06:20.669902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:30.500 [2024-11-21 00:06:20.669931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:30.500 [2024-11-21 00:06:20.669960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:30.500 [2024-11-21 00:06:20.669988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670274] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.670974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 
00:06:20.671207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.671998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:17:30.501 [2024-11-21 00:06:20.672082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.672957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.673019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.673049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.673078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:30.501 [2024-11-21 00:06:20.673128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
00:17:30.501 [2024-11-21 00:06:20.673157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:17:30.501 [2024-11-21 00:06:20.673186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:17:30.501 [2024-11-21 00:06:20.673269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:17:30.501 [2024-11-21 00:06:20.673407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:17:30.501 [2024-11-21 00:06:20.673496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:17:30.501 [2024-11-21 00:06:20.673549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:30.501 [2024-11-21 00:06:20.673590] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:30.501 [2024-11-21 00:06:20.673727] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e
00:17:30.502 [2024-11-21 00:06:20.673760] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:30.502 [2024-11-21 00:06:20.673780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:30.502 [2024-11-21 00:06:20.673830] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:30.502 [2024-11-21 00:06:20.673853] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:30.502 [2024-11-21 00:06:20.673872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:30.502 [2024-11-21 00:06:20.673892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:30.502 [2024-11-21 00:06:20.673950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:30.502 [2024-11-21 00:06:20.673971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:30.502 [2024-11-21 00:06:20.674019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:30.502 [2024-11-21 00:06:20.674046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.502 [2024-11-21 00:06:20.674067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:30.502 [2024-11-21 00:06:20.674118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.504 ms
00:17:30.502 [2024-11-21 00:06:20.674140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.677350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.502 [2024-11-21 00:06:20.677482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:30.502 [2024-11-21 00:06:20.677538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.068 ms
00:17:30.502 [2024-11-21 00:06:20.677562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.677755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.502 [2024-11-21 00:06:20.677797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:30.502 [2024-11-21 00:06:20.677862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms
00:17:30.502 [2024-11-21 00:06:20.677885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.687480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.687638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:30.502 [2024-11-21 00:06:20.687692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.687704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.687791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.687805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:30.502 [2024-11-21 00:06:20.687813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.687822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.687875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.687885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:30.502 [2024-11-21 00:06:20.687893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.687901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.687920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.687929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:30.502 [2024-11-21 00:06:20.687945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.687953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.707868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.707911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:30.502 [2024-11-21 00:06:20.707925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.707935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.723478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.723541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:30.502 [2024-11-21 00:06:20.723554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.723565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.723625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.723637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:30.502 [2024-11-21 00:06:20.723647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.723656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.723691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.723702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:30.502 [2024-11-21 00:06:20.723711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.723723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.723807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.723818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:30.502 [2024-11-21 00:06:20.723833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.723842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.723875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.723886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:30.502 [2024-11-21 00:06:20.723894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.723903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.723968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.723979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:30.502 [2024-11-21 00:06:20.723988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.723996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.724058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.502 [2024-11-21 00:06:20.724071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:30.502 [2024-11-21 00:06:20.724081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.502 [2024-11-21 00:06:20.724094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.502 [2024-11-21 00:06:20.724276] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.152 ms, result 0
00:17:30.764
00:17:30.764
00:17:30.764 00:06:21 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:17:30.764 00:06:21 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:17:31.336 00:06:21 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:31.336 [2024-11-21 00:06:21.673828] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:17:31.336 [2024-11-21 00:06:21.673975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85652 ]
00:17:31.596 [2024-11-21 00:06:21.813170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:31.596 [2024-11-21 00:06:21.884919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:31.857 [2024-11-21 00:06:22.032048] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:31.857 [2024-11-21 00:06:22.032439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:31.857 [2024-11-21 00:06:22.196256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.196501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:17:31.857 [2024-11-21 00:06:22.196712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:17:31.857 [2024-11-21 00:06:22.196757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.200043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.200247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:31.857 [2024-11-21 00:06:22.200691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.222 ms
00:17:31.857 [2024-11-21 00:06:22.200747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.200980] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:17:31.857 [2024-11-21 00:06:22.201483] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:17:31.857 [2024-11-21 00:06:22.201574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.201663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:31.857 [2024-11-21 00:06:22.201698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms
00:17:31.857 [2024-11-21 00:06:22.201720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.203997] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:17:31.857 [2024-11-21 00:06:22.208937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.209099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:17:31.857 [2024-11-21 00:06:22.209165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.943 ms
00:17:31.857 [2024-11-21 00:06:22.209192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.209321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.209355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:17:31.857 [2024-11-21 00:06:22.209436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:17:31.857 [2024-11-21 00:06:22.209460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.220828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.220981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:31.857 [2024-11-21 00:06:22.221040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.299 ms
00:17:31.857 [2024-11-21 00:06:22.221063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.221280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.221424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:31.857 [2024-11-21 00:06:22.221487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms
00:17:31.857 [2024-11-21 00:06:22.221519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.221717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.221816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:17:31.857 [2024-11-21 00:06:22.221877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:17:31.857 [2024-11-21 00:06:22.221901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.221945] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:17:31.857 [2024-11-21 00:06:22.224640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.224784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:31.857 [2024-11-21 00:06:22.224844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms
00:17:31.857 [2024-11-21 00:06:22.224867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.225337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.225486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:17:31.857 [2024-11-21 00:06:22.225557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:17:31.857 [2024-11-21 00:06:22.225585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.225656] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:17:31.857 [2024-11-21 00:06:22.225733] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:17:31.857 [2024-11-21 00:06:22.225999] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:17:31.857 [2024-11-21 00:06:22.226103] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:17:31.857 [2024-11-21 00:06:22.226276] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:17:31.857 [2024-11-21 00:06:22.226345] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:17:31.857 [2024-11-21 00:06:22.226383] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:17:31.857 [2024-11-21 00:06:22.226419] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:17:31.857 [2024-11-21 00:06:22.226450] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:17:31.857 [2024-11-21 00:06:22.226483] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:17:31.857 [2024-11-21 00:06:22.226563] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:17:31.857 [2024-11-21 00:06:22.226588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:17:31.857 [2024-11-21 00:06:22.226608] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:17:31.857 [2024-11-21 00:06:22.226630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.226656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:17:31.857 [2024-11-21 00:06:22.226681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms
00:17:31.857 [2024-11-21 00:06:22.226821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.226938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:31.857 [2024-11-21 00:06:22.226966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:17:31.857 [2024-11-21 00:06:22.227033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:17:31.857 [2024-11-21 00:06:22.227046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:31.857 [2024-11-21 00:06:22.227160] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:17:31.857 [2024-11-21 00:06:22.227175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:17:31.857 [2024-11-21 00:06:22.227190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:31.857 [2024-11-21 00:06:22.227206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:31.857 [2024-11-21 00:06:22.227218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:17:31.857 [2024-11-21 00:06:22.227225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:17:31.857 [2024-11-21 00:06:22.227234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:17:31.857 [2024-11-21 00:06:22.227241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:17:31.857 [2024-11-21 00:06:22.227252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:17:31.857 [2024-11-21 00:06:22.227259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:31.857 [2024-11-21 00:06:22.227266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:17:31.857 [2024-11-21 00:06:22.227273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:17:31.857 [2024-11-21 00:06:22.227281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:31.857 [2024-11-21 00:06:22.227288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:17:31.857 [2024-11-21 00:06:22.227320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:17:31.858 [2024-11-21 00:06:22.227328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:31.858 [2024-11-21 00:06:22.227335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:17:31.858 [2024-11-21 00:06:22.227342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:17:31.858 [2024-11-21 00:06:22.227350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:31.858 [2024-11-21 00:06:22.227359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:17:31.858 [2024-11-21 00:06:22.227366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:17:31.858 [2024-11-21 00:06:22.227373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:31.858 [2024-11-21 00:06:22.227381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:17:31.858 [2024-11-21 00:06:22.227389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:17:31.858 [2024-11-21 00:06:22.227402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:31.858 [2024-11-21 00:06:22.227411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:17:31.858 [2024-11-21 00:06:22.227420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:17:31.858 [2024-11-21 00:06:22.227427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:31.858 [2024-11-21 00:06:22.227435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:17:31.858 [2024-11-21 00:06:22.227442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:17:31.858 [2024-11-21 00:06:22.227449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:31.858 [2024-11-21 00:06:22.227456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:17:31.858 [2024-11-21 00:06:22.227469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:17:31.858 [2024-11-21 00:06:22.227477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:31.858 [2024-11-21 00:06:22.227484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:17:31.858 [2024-11-21 00:06:22.227491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:17:31.858 [2024-11-21 00:06:22.227497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:31.858 [2024-11-21 00:06:22.227504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:17:31.858 [2024-11-21 00:06:22.227513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:17:31.858 [2024-11-21 00:06:22.227519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:31.858 [2024-11-21 00:06:22.227529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:17:31.858 [2024-11-21 00:06:22.227537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:17:31.858 [2024-11-21 00:06:22.227545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:31.858 [2024-11-21 00:06:22.227551] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:17:31.858 [2024-11-21 00:06:22.227564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:17:31.858 [2024-11-21 00:06:22.227572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:31.858 [2024-11-21 00:06:22.227580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:31.858 [2024-11-21 00:06:22.227588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:17:31.858 [2024-11-21 00:06:22.227596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:17:31.858 [2024-11-21 00:06:22.227605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:17:31.858 [2024-11-21 00:06:22.227612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:17:31.858 [2024-11-21 00:06:22.227619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:17:31.858 [2024-11-21 00:06:22.227626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:17:31.858 [2024-11-21 00:06:22.227636] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:17:31.858 [2024-11-21 00:06:22.227649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:31.858 [2024-11-21 00:06:22.227660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:17:31.858 [2024-11-21 00:06:22.227672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:17:31.858 [2024-11-21 00:06:22.227680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:17:31.858 [2024-11-21 00:06:22.227689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:17:31.858 [2024-11-21 00:06:22.227697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:17:31.858 [2024-11-21 00:06:22.227706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:17:31.858 [2024-11-21 00:06:22.227715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:17:31.858 [2024-11-21 00:06:22.227731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:17:31.858 [2024-11-21 00:06:22.227740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:17:31.858 [2024-11-21 00:06:22.227750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:17:31.858 [2024-11-21 00:06:22.227759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:17:31.858 [2024-11-21 00:06:22.227767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:17:31.858 [2024-11-21 00:06:22.227776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:17:31.858 [2024-11-21 00:06:22.227784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:17:31.858 [2024-11-21 00:06:22.227793] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:17:31.858 [2024-11-21 00:06:22.227803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:31.858 [2024-11-21 00:06:22.227814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:31.858 [2024-11-21 00:06:22.227826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:31.858 [2024-11-21 00:06:22.227834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:31.858 [2024-11-21 00:06:22.227842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:31.858 [2024-11-21 00:06:22.227851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.858 [2024-11-21 00:06:22.227861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:31.858 [2024-11-21 00:06:22.227872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:17:31.858 [2024-11-21 00:06:22.227881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.858 [2024-11-21 00:06:22.258346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.858 [2024-11-21 00:06:22.258404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.858 [2024-11-21 00:06:22.258422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.402 ms 00:17:31.858 [2024-11-21 00:06:22.258433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.858 [2024-11-21 00:06:22.258627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.858 [2024-11-21 00:06:22.258645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:31.858 [2024-11-21 00:06:22.258658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:31.858 [2024-11-21 00:06:22.258674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.274526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.274571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.120 [2024-11-21 00:06:22.274589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.823 ms 00:17:32.120 [2024-11-21 00:06:22.274598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.274679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.274690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.120 [2024-11-21 00:06:22.274705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:32.120 [2024-11-21 00:06:22.274718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.275433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.275464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.120 [2024-11-21 00:06:22.275477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:17:32.120 [2024-11-21 00:06:22.275488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.275662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.275683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.120 [2024-11-21 00:06:22.275697] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:32.120 [2024-11-21 00:06:22.275713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.285953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.286146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.120 [2024-11-21 00:06:22.286165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.213 ms 00:17:32.120 [2024-11-21 00:06:22.286175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.290922] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:32.120 [2024-11-21 00:06:22.290976] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:32.120 [2024-11-21 00:06:22.290991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.291001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:32.120 [2024-11-21 00:06:22.291011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.654 ms 00:17:32.120 [2024-11-21 00:06:22.291020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.307436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.307484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:32.120 [2024-11-21 00:06:22.307497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.332 ms 00:17:32.120 [2024-11-21 00:06:22.307505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.310649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.310707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:32.120 [2024-11-21 00:06:22.310719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.046 ms 00:17:32.120 [2024-11-21 00:06:22.310729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.313472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.313520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:32.120 [2024-11-21 00:06:22.313542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:17:32.120 [2024-11-21 00:06:22.313550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.313916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.313931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:32.120 [2024-11-21 00:06:22.313949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:17:32.120 [2024-11-21 00:06:22.313958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.345865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.345930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:32.120 [2024-11-21 00:06:22.345945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.876 ms 00:17:32.120 [2024-11-21 00:06:22.345954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.354395] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:32.120 [2024-11-21 00:06:22.379237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.379315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:32.120 [2024-11-21 00:06:22.379331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.184 ms 00:17:32.120 [2024-11-21 00:06:22.379341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.379469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.379485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:32.120 [2024-11-21 00:06:22.379497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:32.120 [2024-11-21 00:06:22.379514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.379588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.379603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:32.120 [2024-11-21 00:06:22.379616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:32.120 [2024-11-21 00:06:22.379624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.379656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.379668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:32.120 [2024-11-21 00:06:22.379677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:32.120 [2024-11-21 00:06:22.379689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.379733] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:32.120 [2024-11-21 00:06:22.379745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.379757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:32.120 [2024-11-21 00:06:22.379768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:32.120 [2024-11-21 00:06:22.379777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.386835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.386885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:32.120 [2024-11-21 00:06:22.386898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.033 ms 00:17:32.120 [2024-11-21 00:06:22.386917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 [2024-11-21 00:06:22.387020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.120 [2024-11-21 00:06:22.387034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:32.120 [2024-11-21 00:06:22.387045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:32.120 [2024-11-21 00:06:22.387055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.120 
[2024-11-21 00:06:22.388788] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:32.120 [2024-11-21 00:06:22.390485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 192.133 ms, result 0 00:17:32.120 [2024-11-21 00:06:22.391818] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:32.120 [2024-11-21 00:06:22.399154] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:32.381  [2024-11-21T00:06:22.802Z] Copying: 4096/4096 [kB] (average 10 MBps) [2024-11-21 00:06:22.789092] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:32.381 [2024-11-21 00:06:22.790893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.381 [2024-11-21 00:06:22.791119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:32.381 [2024-11-21 00:06:22.791147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:32.381 [2024-11-21 00:06:22.791157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.381 [2024-11-21 00:06:22.791202] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:32.381 [2024-11-21 00:06:22.792140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.381 [2024-11-21 00:06:22.792187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:32.381 [2024-11-21 00:06:22.792208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:17:32.381 [2024-11-21 00:06:22.792216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.381 [2024-11-21 00:06:22.795324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.381 [2024-11-21 00:06:22.795367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:32.381 [2024-11-21 00:06:22.795379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.078 ms 00:17:32.381 [2024-11-21 00:06:22.795388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.644 [2024-11-21 00:06:22.799888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.644 [2024-11-21 00:06:22.799934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:32.644 [2024-11-21 00:06:22.799946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.474 ms 00:17:32.644 [2024-11-21 00:06:22.799955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.644 [2024-11-21 00:06:22.806892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.644 [2024-11-21 00:06:22.806931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:32.644 [2024-11-21 00:06:22.806944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.904 ms 00:17:32.644 [2024-11-21 00:06:22.806952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.644 [2024-11-21 00:06:22.809599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.644 [2024-11-21 00:06:22.809792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:32.644 [2024-11-21 00:06:22.809809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 2.582 ms 00:17:32.644 [2024-11-21 00:06:22.809819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.644 [2024-11-21 00:06:22.815623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.644 [2024-11-21 00:06:22.815673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:32.644 [2024-11-21 00:06:22.815694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.674 ms 00:17:32.644 [2024-11-21 00:06:22.815704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.644 [2024-11-21 00:06:22.815844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.644 [2024-11-21 00:06:22.815856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:32.644 [2024-11-21 00:06:22.815874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:32.644 [2024-11-21 00:06:22.815883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.644 [2024-11-21 00:06:22.819501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.644 [2024-11-21 00:06:22.819685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:32.644 [2024-11-21 00:06:22.819703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.597 ms 00:17:32.644 [2024-11-21 00:06:22.819711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.644 [2024-11-21 00:06:22.822805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.644 [2024-11-21 00:06:22.822975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:32.644 [2024-11-21 00:06:22.822991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.992 ms 00:17:32.644 [2024-11-21 00:06:22.822999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.644 [2024-11-21 00:06:22.825478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.644 [2024-11-21 00:06:22.825523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:32.644 [2024-11-21 00:06:22.825533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:17:32.644 [2024-11-21 00:06:22.825540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.644 [2024-11-21 00:06:22.828036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.644 [2024-11-21 00:06:22.828081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:32.644 [2024-11-21 00:06:22.828090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.419 ms 00:17:32.644 [2024-11-21 00:06:22.828098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.644 [2024-11-21 00:06:22.828140] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:32.644 [2024-11-21 00:06:22.828164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:32.644 [2024-11-21 00:06:22.828175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:32.644 [2024-11-21 00:06:22.828184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:32.644 [2024-11-21 00:06:22.828192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:32.644 [2024-11-21 
00:06:22.828200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:32.644 [2024-11-21 00:06:22.828208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:32.644 [2024-11-21 00:06:22.828215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:32.644 [2024-11-21 00:06:22.828223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:32.644 [2024-11-21 00:06:22.828231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:32.644 [2024-11-21 00:06:22.828241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:32.645 [2024-11-21 00:06:22.828419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:32.645 [2024-11-21 00:06:22.828983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:32.646 [2024-11-21 00:06:22.828999] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:32.646 [2024-11-21 00:06:22.829007] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e 00:17:32.646 [2024-11-21 00:06:22.829016] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:32.646 [2024-11-21 00:06:22.829025] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:32.646 
[2024-11-21 00:06:22.829037] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:32.646 [2024-11-21 00:06:22.829046] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:32.646 [2024-11-21 00:06:22.829054] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:32.646 [2024-11-21 00:06:22.829063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:32.646 [2024-11-21 00:06:22.829072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:32.646 [2024-11-21 00:06:22.829079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:32.646 [2024-11-21 00:06:22.829086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:32.646 [2024-11-21 00:06:22.829094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.646 [2024-11-21 00:06:22.829101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:32.646 [2024-11-21 00:06:22.829114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:17:32.646 [2024-11-21 00:06:22.829127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.832292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.646 [2024-11-21 00:06:22.832344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:32.646 [2024-11-21 00:06:22.832355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.144 ms 00:17:32.646 [2024-11-21 00:06:22.832363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.832514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.646 [2024-11-21 00:06:22.832531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:32.646 [2024-11-21 00:06:22.832540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:32.646 [2024-11-21 00:06:22.832548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.842210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.842386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.646 [2024-11-21 00:06:22.842447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.842470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.842565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.842601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.646 [2024-11-21 00:06:22.842622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.842641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.842760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.842790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.646 [2024-11-21 00:06:22.842814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.842837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.842869] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.842891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.646 [2024-11-21 00:06:22.842925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.843004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.862849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.863048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.646 [2024-11-21 00:06:22.863108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.863142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.878658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.878859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.646 [2024-11-21 00:06:22.878919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.878943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.879014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.879039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.646 [2024-11-21 00:06:22.879061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.879082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.879131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.879152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.646 [2024-11-21 00:06:22.879174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.879248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.879389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.879404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.646 [2024-11-21 00:06:22.879414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.879424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.879464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.879477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:32.646 [2024-11-21 00:06:22.879490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.879499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.879566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.879578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.646 [2024-11-21 00:06:22.879587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.879598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:32.646 [2024-11-21 00:06:22.879659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.646 [2024-11-21 00:06:22.879672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.646 [2024-11-21 00:06:22.879680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.646 [2024-11-21 00:06:22.879692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.646 [2024-11-21 00:06:22.879881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.980 ms, result 0 00:17:32.908 00:17:32.908 00:17:32.908 00:06:23 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85668 00:17:32.908 00:06:23 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85668 00:17:32.908 00:06:23 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:32.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.908 00:06:23 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85668 ']' 00:17:32.908 00:06:23 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.908 00:06:23 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.908 00:06:23 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.908 00:06:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.908 00:06:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:32.908 [2024-11-21 00:06:23.278004] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:32.908 [2024-11-21 00:06:23.278367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85668 ] 00:17:33.169 [2024-11-21 00:06:23.410620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.169 [2024-11-21 00:06:23.481892] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.739 00:06:24 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.739 00:06:24 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:33.739 00:06:24 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:33.999 [2024-11-21 00:06:24.335393] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:33.999 [2024-11-21 00:06:24.335472] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:34.262 [2024-11-21 00:06:24.514732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.262 [2024-11-21 00:06:24.514792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:34.262 [2024-11-21 00:06:24.514813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:34.262 [2024-11-21 00:06:24.514824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.517540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.262 [2024-11-21 00:06:24.517592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.262 [2024-11-21 00:06:24.517606] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.694 ms 00:17:34.262 [2024-11-21 00:06:24.517617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.517713] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:34.262 [2024-11-21 00:06:24.518019] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:34.262 [2024-11-21 00:06:24.518035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.262 [2024-11-21 00:06:24.518045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.262 [2024-11-21 00:06:24.518066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:17:34.262 [2024-11-21 00:06:24.518077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.520539] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:34.262 [2024-11-21 00:06:24.525253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.262 [2024-11-21 00:06:24.525348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:34.262 [2024-11-21 00:06:24.525364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.705 ms 00:17:34.262 [2024-11-21 00:06:24.525372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.525459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.262 [2024-11-21 00:06:24.525474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:34.262 [2024-11-21 00:06:24.525488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:34.262 [2024-11-21 00:06:24.525496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.536844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.262 [2024-11-21 00:06:24.536891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.262 [2024-11-21 00:06:24.536908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.282 ms 00:17:34.262 [2024-11-21 00:06:24.536916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.537057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.262 [2024-11-21 00:06:24.537071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.262 [2024-11-21 00:06:24.537083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:34.262 [2024-11-21 00:06:24.537091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.537125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.262 [2024-11-21 00:06:24.537134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:34.262 [2024-11-21 00:06:24.537145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:34.262 [2024-11-21 00:06:24.537160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.537191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:34.262 [2024-11-21 00:06:24.539862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:34.262 [2024-11-21 00:06:24.540166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.262 [2024-11-21 00:06:24.540196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:17:34.262 [2024-11-21 00:06:24.540207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.540266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.262 [2024-11-21 00:06:24.540279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:34.262 [2024-11-21 00:06:24.540288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:34.262 [2024-11-21 00:06:24.540315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.540339] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:34.262 [2024-11-21 00:06:24.540369] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:34.262 [2024-11-21 00:06:24.540416] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:34.262 [2024-11-21 00:06:24.540438] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:34.262 [2024-11-21 00:06:24.540557] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:34.262 [2024-11-21 00:06:24.540572] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:34.262 [2024-11-21 00:06:24.540584] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:34.262 [2024-11-21 00:06:24.540601] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:34.262 [2024-11-21 00:06:24.540612] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:34.262 [2024-11-21 00:06:24.540628] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:34.262 [2024-11-21 00:06:24.540637] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:34.262 [2024-11-21 00:06:24.540648] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:34.262 [2024-11-21 00:06:24.540657] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:34.262 [2024-11-21 00:06:24.540669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.262 [2024-11-21 00:06:24.540682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:34.262 [2024-11-21 00:06:24.540697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:17:34.262 [2024-11-21 00:06:24.540710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.540803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.262 [2024-11-21 00:06:24.540815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:34.262 [2024-11-21 00:06:24.540828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:34.262 [2024-11-21 00:06:24.540837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.262 [2024-11-21 00:06:24.540946] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:34.262 [2024-11-21 00:06:24.540959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:34.262 [2024-11-21 00:06:24.540973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.262 [2024-11-21 00:06:24.540986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.262 [2024-11-21 00:06:24.541003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:34.262 [2024-11-21 00:06:24.541010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:34.262 [2024-11-21 00:06:24.541021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:34.262 [2024-11-21 00:06:24.541029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:34.262 [2024-11-21 00:06:24.541042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:34.262 [2024-11-21 00:06:24.541049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.262 [2024-11-21 00:06:24.541058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:34.263 [2024-11-21 00:06:24.541066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:34.263 [2024-11-21 00:06:24.541076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.263 [2024-11-21 00:06:24.541085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:34.263 [2024-11-21 00:06:24.541095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:34.263 [2024-11-21 00:06:24.541102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.263 [2024-11-21 00:06:24.541113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:34.263 [2024-11-21 00:06:24.541121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:34.263 [2024-11-21 00:06:24.541131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.263 [2024-11-21 00:06:24.541138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:34.263 [2024-11-21 00:06:24.541150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:34.263 [2024-11-21 00:06:24.541157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.263 [2024-11-21 00:06:24.541165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:34.263 [2024-11-21 00:06:24.541173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:34.263 [2024-11-21 00:06:24.541183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.263 [2024-11-21 00:06:24.541189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:34.263 [2024-11-21 00:06:24.541214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:34.263 [2024-11-21 00:06:24.541222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.263 [2024-11-21 00:06:24.541231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:34.263 [2024-11-21 00:06:24.541238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:34.263 [2024-11-21 00:06:24.541248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.263 [2024-11-21 00:06:24.541257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:34.263 [2024-11-21 
00:06:24.541267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:34.263 [2024-11-21 00:06:24.541273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.263 [2024-11-21 00:06:24.541282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:34.263 [2024-11-21 00:06:24.541289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:34.263 [2024-11-21 00:06:24.541316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.263 [2024-11-21 00:06:24.541323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:34.263 [2024-11-21 00:06:24.541333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:34.263 [2024-11-21 00:06:24.541340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.263 [2024-11-21 00:06:24.541350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:34.263 [2024-11-21 00:06:24.541356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:34.263 [2024-11-21 00:06:24.541365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.263 [2024-11-21 00:06:24.541373] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:34.263 [2024-11-21 00:06:24.541384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:34.263 [2024-11-21 00:06:24.541394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.263 [2024-11-21 00:06:24.541408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.263 [2024-11-21 00:06:24.541416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:34.263 [2024-11-21 00:06:24.541426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:34.263 [2024-11-21 00:06:24.541433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:34.263 [2024-11-21 00:06:24.541442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:34.263 [2024-11-21 00:06:24.541450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:34.263 [2024-11-21 00:06:24.541463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:34.263 [2024-11-21 00:06:24.541472] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:34.263 [2024-11-21 00:06:24.541486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.263 [2024-11-21 00:06:24.541498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:34.263 [2024-11-21 00:06:24.541508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:34.263 [2024-11-21 00:06:24.541516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:34.263 [2024-11-21 00:06:24.541526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:34.263 [2024-11-21 00:06:24.541534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:34.263 
[2024-11-21 00:06:24.541543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:34.263 [2024-11-21 00:06:24.541550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:34.263 [2024-11-21 00:06:24.541559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:34.263 [2024-11-21 00:06:24.541566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:34.263 [2024-11-21 00:06:24.541575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:34.263 [2024-11-21 00:06:24.541583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:34.263 [2024-11-21 00:06:24.541593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:34.263 [2024-11-21 00:06:24.541600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:34.263 [2024-11-21 00:06:24.541612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:34.263 [2024-11-21 00:06:24.541625] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:34.263 [2024-11-21 00:06:24.541635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.263 [2024-11-21 00:06:24.541646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:34.263 [2024-11-21 00:06:24.541657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:34.263 [2024-11-21 00:06:24.541665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:34.263 [2024-11-21 00:06:24.541675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:34.263 [2024-11-21 00:06:24.541682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.263 [2024-11-21 00:06:24.541704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:34.263 [2024-11-21 00:06:24.541716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:17:34.263 [2024-11-21 00:06:24.541727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.263 [2024-11-21 00:06:24.561802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.263 [2024-11-21 00:06:24.561857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.263 [2024-11-21 00:06:24.561869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.988 ms 00:17:34.263 [2024-11-21 00:06:24.561881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.263 [2024-11-21 00:06:24.562023] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.263 [2024-11-21 00:06:24.562041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:34.263 [2024-11-21 00:06:24.562054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:34.263 [2024-11-21 00:06:24.562065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.263 [2024-11-21 00:06:24.578465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.263 [2024-11-21 00:06:24.578514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.263 [2024-11-21 00:06:24.578525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.375 ms 00:17:34.263 [2024-11-21 00:06:24.578537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.263 [2024-11-21 00:06:24.578607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.263 [2024-11-21 00:06:24.578623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.263 [2024-11-21 00:06:24.578633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:34.263 [2024-11-21 00:06:24.578644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.263 [2024-11-21 00:06:24.579363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.263 [2024-11-21 00:06:24.579426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.264 [2024-11-21 00:06:24.579438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:17:34.264 [2024-11-21 00:06:24.579453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.264 [2024-11-21 00:06:24.579627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.264 [2024-11-21 00:06:24.579643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:34.264 [2024-11-21 00:06:24.579661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:17:34.264 [2024-11-21 00:06:24.579675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.264 [2024-11-21 00:06:24.600815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.264 [2024-11-21 00:06:24.601031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.264 [2024-11-21 00:06:24.601055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.110 ms 00:17:34.264 [2024-11-21 00:06:24.601068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.264 [2024-11-21 00:06:24.605940] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:34.264 [2024-11-21 00:06:24.605997] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:34.264 [2024-11-21 00:06:24.606012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.264 [2024-11-21 00:06:24.606024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:34.264 [2024-11-21 00:06:24.606036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.768 ms 00:17:34.264 [2024-11-21 00:06:24.606047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.264 [2024-11-21 00:06:24.623571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.264 [2024-11-21 
00:06:24.623780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:34.264 [2024-11-21 00:06:24.623803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.437 ms 00:17:34.264 [2024-11-21 00:06:24.623819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.264 [2024-11-21 00:06:24.626785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.264 [2024-11-21 00:06:24.626842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:34.264 [2024-11-21 00:06:24.626854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.864 ms 00:17:34.264 [2024-11-21 00:06:24.626864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.264 [2024-11-21 00:06:24.629703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.264 [2024-11-21 00:06:24.629876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:34.264 [2024-11-21 00:06:24.629947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:17:34.264 [2024-11-21 00:06:24.629974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.264 [2024-11-21 00:06:24.630521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.264 [2024-11-21 00:06:24.630603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:34.264 [2024-11-21 00:06:24.630699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:17:34.264 [2024-11-21 00:06:24.630732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.264 [2024-11-21 00:06:24.662666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.264 [2024-11-21 00:06:24.662857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:34.264 [2024-11-21 00:06:24.662878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.889 ms 00:17:34.264 [2024-11-21 00:06:24.662893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.264 [2024-11-21 00:06:24.671443] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:34.526 [2024-11-21 00:06:24.695610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.526 [2024-11-21 00:06:24.695667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:34.526 [2024-11-21 00:06:24.695684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.619 ms 00:17:34.526 [2024-11-21 00:06:24.695693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.526 [2024-11-21 00:06:24.695801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.526 [2024-11-21 00:06:24.695823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:34.526 [2024-11-21 00:06:24.695837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:34.526 [2024-11-21 00:06:24.695849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.526 [2024-11-21 00:06:24.695925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.526 [2024-11-21 00:06:24.695935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:34.526 [2024-11-21 00:06:24.695952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:34.526 [2024-11-21 
00:06:24.695965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.526 [2024-11-21 00:06:24.696005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.526 [2024-11-21 00:06:24.696016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:34.526 [2024-11-21 00:06:24.696031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:34.526 [2024-11-21 00:06:24.696039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.526 [2024-11-21 00:06:24.696084] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:34.526 [2024-11-21 00:06:24.696097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.526 [2024-11-21 00:06:24.696107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:34.526 [2024-11-21 00:06:24.696115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:34.526 [2024-11-21 00:06:24.696126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.526 [2024-11-21 00:06:24.703281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.526 [2024-11-21 00:06:24.703351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:34.526 [2024-11-21 00:06:24.703364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.131 ms 00:17:34.526 [2024-11-21 00:06:24.703375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.526 [2024-11-21 00:06:24.703477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.526 [2024-11-21 00:06:24.703490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:34.526 [2024-11-21 00:06:24.703501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:34.526 [2024-11-21 00:06:24.703514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.526 [2024-11-21 00:06:24.704750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:34.526 [2024-11-21 00:06:24.706244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 189.621 ms, result 0 00:17:34.526 [2024-11-21 00:06:24.708925] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:34.526 Some configs were skipped because the RPC state that can call them passed over. 
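The startup sequence above is reported as trace_step quadruples (Action, name, duration, status) from mngt/ftl_mngt.c and is closed by the finish_msg for 'FTL startup'; the trailing "Some configs were skipped..." notice is SPDK's standard indication that some entries in the supplied JSON config targeted an RPC startup state that had already passed. A minimal bash sketch for pulling per-step timings out of such a capture — ftl.log is a hypothetical file holding this console output with one notice per line:

    # ftl.log: hypothetical one-notice-per-line capture of the output above
    grep -E 'trace_step.*(name|duration):' ftl.log \
      | sed -E 's/.*(name|duration): //' \
      | paste - -          # tab-separated rows: <step name> <duration>

Each output row pairs a management step with its reported duration, which makes the slow steps easy to spot; in the run above, 'Initialize L2P' (32.619 ms) and 'Restore P2L checkpoints' (31.889 ms) dominate the 189.621 ms total.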
00:17:34.526 00:06:24 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:34.526 [2024-11-21 00:06:24.942540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.526 [2024-11-21 00:06:24.942712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:34.526 [2024-11-21 00:06:24.942784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:17:34.526 [2024-11-21 00:06:24.942809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.526 [2024-11-21 00:06:24.942868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.683 ms, result 0 00:17:34.526 true 00:17:34.787 00:06:24 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:34.787 [2024-11-21 00:06:25.158420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.787 [2024-11-21 00:06:25.158582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:34.787 [2024-11-21 00:06:25.158641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.886 ms 00:17:34.787 [2024-11-21 00:06:25.158668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.787 [2024-11-21 00:06:25.158727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.187 ms, result 0 00:17:34.787 true 00:17:34.787 00:06:25 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85668 00:17:34.787 00:06:25 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85668 ']' 00:17:34.787 00:06:25 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85668 00:17:34.787 00:06:25 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:34.787 00:06:25 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.787 00:06:25 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85668 00:17:35.050 killing process with pid 85668 00:17:35.050 00:06:25 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:35.050 00:06:25 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:35.050 00:06:25 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85668' 00:17:35.050 00:06:25 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85668 00:17:35.050 00:06:25 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85668 00:17:35.050 [2024-11-21 00:06:25.416823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.050 [2024-11-21 00:06:25.416913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:35.050 [2024-11-21 00:06:25.416933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:35.050 [2024-11-21 00:06:25.416943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.050 [2024-11-21 00:06:25.416975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:35.050 [2024-11-21 00:06:25.417961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.050 [2024-11-21 00:06:25.418001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:35.050 [2024-11-21 00:06:25.418015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.970 ms 00:17:35.050 [2024-11-21 00:06:25.418027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.050 [2024-11-21 00:06:25.418364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.050 [2024-11-21 00:06:25.418389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:35.050 [2024-11-21 00:06:25.418399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:35.050 [2024-11-21 00:06:25.418416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.050 [2024-11-21 00:06:25.423035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.050 [2024-11-21 00:06:25.423097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:35.050 [2024-11-21 00:06:25.423110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:17:35.050 [2024-11-21 00:06:25.423124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.050 [2024-11-21 00:06:25.430254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.050 [2024-11-21 00:06:25.430326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:35.050 [2024-11-21 00:06:25.430338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.079 ms 00:17:35.050 [2024-11-21 00:06:25.430353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.050 [2024-11-21 00:06:25.433419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.050 [2024-11-21 00:06:25.433474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:35.050 [2024-11-21 00:06:25.433486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.984 ms 00:17:35.050 [2024-11-21 00:06:25.433497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.050 [2024-11-21 00:06:25.440429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.050 [2024-11-21 00:06:25.440638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:35.050 [2024-11-21 00:06:25.440899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.875 ms 00:17:35.050 [2024-11-21 00:06:25.440948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.050 [2024-11-21 00:06:25.441123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.050 [2024-11-21 00:06:25.441152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:35.050 [2024-11-21 00:06:25.441174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:35.050 [2024-11-21 00:06:25.441222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.050 [2024-11-21 00:06:25.445459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.050 [2024-11-21 00:06:25.445648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:35.050 [2024-11-21 00:06:25.445714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.202 ms 00:17:35.050 [2024-11-21 00:06:25.445746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.051 [2024-11-21 00:06:25.448587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.051 [2024-11-21 00:06:25.448743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:35.051 [2024-11-21 
00:06:25.448760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.781 ms 00:17:35.051 [2024-11-21 00:06:25.448770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.051 [2024-11-21 00:06:25.450848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.051 [2024-11-21 00:06:25.451397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:35.051 [2024-11-21 00:06:25.451426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.035 ms 00:17:35.051 [2024-11-21 00:06:25.451439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.051 [2024-11-21 00:06:25.453802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.051 [2024-11-21 00:06:25.453864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:35.051 [2024-11-21 00:06:25.453877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.883 ms 00:17:35.051 [2024-11-21 00:06:25.453888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.051 [2024-11-21 00:06:25.453938] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:35.051 [2024-11-21 00:06:25.453959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.453970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.453985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.453994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454117] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 
00:06:25.454409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:35.051 [2024-11-21 00:06:25.454640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:35.051 [2024-11-21 00:06:25.454683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:35.052 [2024-11-21 00:06:25.454951] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:35.052 [2024-11-21 00:06:25.454959] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e 00:17:35.052 [2024-11-21 00:06:25.454970] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:35.052 [2024-11-21 00:06:25.454978] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:35.052 [2024-11-21 00:06:25.454988] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:35.052 [2024-11-21 00:06:25.454999] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:35.052 [2024-11-21 00:06:25.455008] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:35.052 [2024-11-21 00:06:25.455016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:35.052 [2024-11-21 00:06:25.455025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:35.052 [2024-11-21 00:06:25.455031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:35.052 [2024-11-21 00:06:25.455041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:35.052 [2024-11-21 00:06:25.455049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.052 [2024-11-21 00:06:25.455064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:35.052 [2024-11-21 00:06:25.455072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:17:35.052 [2024-11-21 00:06:25.455085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.052 [2024-11-21 00:06:25.457984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.052 [2024-11-21 00:06:25.458031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:35.052 [2024-11-21 00:06:25.458043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.876 ms 00:17:35.052 [2024-11-21 00:06:25.458057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.052 [2024-11-21 00:06:25.458228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:35.052 [2024-11-21 00:06:25.458243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:35.052 [2024-11-21 00:06:25.458253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:17:35.052 [2024-11-21 00:06:25.458265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.469081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.469136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.314 [2024-11-21 00:06:25.469149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.469160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.469320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.469335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.314 [2024-11-21 00:06:25.469346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.469361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.469417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.469430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.314 [2024-11-21 00:06:25.469441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.469452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.469478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.469490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.314 [2024-11-21 00:06:25.469498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.469508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.488964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.489032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.314 [2024-11-21 00:06:25.489043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.489054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.504530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.504593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.314 [2024-11-21 00:06:25.504606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.504621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.504700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.504726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.314 [2024-11-21 00:06:25.504741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.504756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:35.314 [2024-11-21 00:06:25.504798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.504810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.314 [2024-11-21 00:06:25.504819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.504831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.504918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.504944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.314 [2024-11-21 00:06:25.504954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.504965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.505006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.505021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:35.314 [2024-11-21 00:06:25.505030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.505043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.505106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.505121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.314 [2024-11-21 00:06:25.505132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.505148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.505241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.314 [2024-11-21 00:06:25.505260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.314 [2024-11-21 00:06:25.505272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.314 [2024-11-21 00:06:25.505284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.314 [2024-11-21 00:06:25.505534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.662 ms, result 0 00:17:35.576 00:06:25 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:35.576 [2024-11-21 00:06:25.931938] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
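The spdk_dd step above pulls 65536 blocks out of the ftl0 bdev into test/ftl/data, driven by the bdev configuration in ftl.json; the notices that follow are that process's own DPDK/FTL bring-up. As a hedged sketch only, the reverse transfer would swap the bdev to the output side, assuming spdk_dd's --if/--ob options mirror the --of/--ib pair used here:

    # Sketch (not part of this run): write test/ftl/data back into ftl0,
    # assuming symmetric --if/--ob handling in spdk_dd.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --ob=ftl0 --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json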
00:17:35.576 [2024-11-21 00:06:25.932084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85716 ] 00:17:35.838 [2024-11-21 00:06:26.070442] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.838 [2024-11-21 00:06:26.140805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.101 [2024-11-21 00:06:26.288495] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:36.102 [2024-11-21 00:06:26.288591] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:36.102 [2024-11-21 00:06:26.452908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.102 [2024-11-21 00:06:26.452969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:36.102 [2024-11-21 00:06:26.452986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:36.102 [2024-11-21 00:06:26.452996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.455731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.102 [2024-11-21 00:06:26.455788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:36.102 [2024-11-21 00:06:26.455802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:17:36.102 [2024-11-21 00:06:26.455814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.455911] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:36.102 [2024-11-21 00:06:26.456337] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:36.102 [2024-11-21 00:06:26.456380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.102 [2024-11-21 00:06:26.456390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:36.102 [2024-11-21 00:06:26.456405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:17:36.102 [2024-11-21 00:06:26.456414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.458734] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:36.102 [2024-11-21 00:06:26.463506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.102 [2024-11-21 00:06:26.463553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:36.102 [2024-11-21 00:06:26.463575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.775 ms 00:17:36.102 [2024-11-21 00:06:26.463586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.463672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.102 [2024-11-21 00:06:26.463683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:36.102 [2024-11-21 00:06:26.463693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:36.102 [2024-11-21 00:06:26.463702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.474989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:36.102 [2024-11-21 00:06:26.475034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:36.102 [2024-11-21 00:06:26.475046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.242 ms 00:17:36.102 [2024-11-21 00:06:26.475054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.475198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.102 [2024-11-21 00:06:26.475210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:36.102 [2024-11-21 00:06:26.475224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:36.102 [2024-11-21 00:06:26.475233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.475262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.102 [2024-11-21 00:06:26.475273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:36.102 [2024-11-21 00:06:26.475291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:36.102 [2024-11-21 00:06:26.475330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.475354] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:36.102 [2024-11-21 00:06:26.478019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.102 [2024-11-21 00:06:26.478058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:36.102 [2024-11-21 00:06:26.478070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.672 ms 00:17:36.102 [2024-11-21 00:06:26.478078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.478123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.102 [2024-11-21 00:06:26.478136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:36.102 [2024-11-21 00:06:26.478148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:36.102 [2024-11-21 00:06:26.478156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.478176] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:36.102 [2024-11-21 00:06:26.478200] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:36.102 [2024-11-21 00:06:26.478246] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:36.102 [2024-11-21 00:06:26.478269] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:36.102 [2024-11-21 00:06:26.478404] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:36.102 [2024-11-21 00:06:26.478419] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:36.102 [2024-11-21 00:06:26.478438] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:36.102 [2024-11-21 00:06:26.478450] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:36.102 [2024-11-21 00:06:26.478460] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:36.102 [2024-11-21 00:06:26.478469] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:36.102 [2024-11-21 00:06:26.478477] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:36.102 [2024-11-21 00:06:26.478486] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:36.102 [2024-11-21 00:06:26.478494] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:36.102 [2024-11-21 00:06:26.478504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.102 [2024-11-21 00:06:26.478519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:36.102 [2024-11-21 00:06:26.478533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:17:36.102 [2024-11-21 00:06:26.478541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.478629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.102 [2024-11-21 00:06:26.478640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:36.102 [2024-11-21 00:06:26.478648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:36.102 [2024-11-21 00:06:26.478657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.102 [2024-11-21 00:06:26.478760] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:36.102 [2024-11-21 00:06:26.478781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:36.102 [2024-11-21 00:06:26.478791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:36.102 [2024-11-21 00:06:26.478810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.102 [2024-11-21 00:06:26.478818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:36.102 [2024-11-21 00:06:26.478827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:36.102 [2024-11-21 00:06:26.478836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:36.102 [2024-11-21 00:06:26.478844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:36.102 [2024-11-21 00:06:26.478857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:36.102 [2024-11-21 00:06:26.478865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:36.102 [2024-11-21 00:06:26.478874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:36.102 [2024-11-21 00:06:26.478885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:36.102 [2024-11-21 00:06:26.478896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:36.102 [2024-11-21 00:06:26.478903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:36.102 [2024-11-21 00:06:26.478910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:36.102 [2024-11-21 00:06:26.478918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.102 [2024-11-21 00:06:26.478925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:36.102 [2024-11-21 00:06:26.478932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:36.102 [2024-11-21 00:06:26.478940] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.102 [2024-11-21 00:06:26.478948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:36.102 [2024-11-21 00:06:26.478956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:36.102 [2024-11-21 00:06:26.478964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.102 [2024-11-21 00:06:26.478971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:36.102 [2024-11-21 00:06:26.478978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:36.102 [2024-11-21 00:06:26.478990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.102 [2024-11-21 00:06:26.478997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:36.102 [2024-11-21 00:06:26.479004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:36.102 [2024-11-21 00:06:26.479011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.102 [2024-11-21 00:06:26.479018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:36.102 [2024-11-21 00:06:26.479026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:36.102 [2024-11-21 00:06:26.479035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.102 [2024-11-21 00:06:26.479042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:36.102 [2024-11-21 00:06:26.479049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:36.102 [2024-11-21 00:06:26.479056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:36.102 [2024-11-21 00:06:26.479063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:36.102 [2024-11-21 00:06:26.479070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:36.102 [2024-11-21 00:06:26.479077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:36.102 [2024-11-21 00:06:26.479083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:36.103 [2024-11-21 00:06:26.479091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:36.103 [2024-11-21 00:06:26.479099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.103 [2024-11-21 00:06:26.479108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:36.103 [2024-11-21 00:06:26.479115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:36.103 [2024-11-21 00:06:26.479121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.103 [2024-11-21 00:06:26.479131] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:36.103 [2024-11-21 00:06:26.479140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:36.103 [2024-11-21 00:06:26.479152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:36.103 [2024-11-21 00:06:26.479165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.103 [2024-11-21 00:06:26.479174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:36.103 [2024-11-21 00:06:26.479181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:36.103 [2024-11-21 00:06:26.479188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:36.103 
[2024-11-21 00:06:26.479196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:36.103 [2024-11-21 00:06:26.479203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:36.103 [2024-11-21 00:06:26.479210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:36.103 [2024-11-21 00:06:26.479220] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:36.103 [2024-11-21 00:06:26.479231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:36.103 [2024-11-21 00:06:26.479241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:36.103 [2024-11-21 00:06:26.479252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:36.103 [2024-11-21 00:06:26.479259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:36.103 [2024-11-21 00:06:26.479267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:36.103 [2024-11-21 00:06:26.479275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:36.103 [2024-11-21 00:06:26.479283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:36.103 [2024-11-21 00:06:26.479291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:36.103 [2024-11-21 00:06:26.479333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:36.103 [2024-11-21 00:06:26.479340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:36.103 [2024-11-21 00:06:26.479348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:36.103 [2024-11-21 00:06:26.479355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:36.103 [2024-11-21 00:06:26.479363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:36.103 [2024-11-21 00:06:26.479370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:36.103 [2024-11-21 00:06:26.479378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:36.103 [2024-11-21 00:06:26.479387] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:36.103 [2024-11-21 00:06:26.479397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:36.103 [2024-11-21 00:06:26.479406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:36.103 [2024-11-21 00:06:26.479421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:36.103 [2024-11-21 00:06:26.479429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:36.103 [2024-11-21 00:06:26.479439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:36.103 [2024-11-21 00:06:26.479449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.103 [2024-11-21 00:06:26.479460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:36.103 [2024-11-21 00:06:26.479473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:17:36.103 [2024-11-21 00:06:26.479482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.103 [2024-11-21 00:06:26.507211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.103 [2024-11-21 00:06:26.507274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:36.103 [2024-11-21 00:06:26.507292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.656 ms 00:17:36.103 [2024-11-21 00:06:26.507327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.103 [2024-11-21 00:06:26.507522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.103 [2024-11-21 00:06:26.507542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:36.103 [2024-11-21 00:06:26.507555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:36.103 [2024-11-21 00:06:26.507570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.366 [2024-11-21 00:06:26.523430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.366 [2024-11-21 00:06:26.523476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:36.366 [2024-11-21 00:06:26.523488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.830 ms 00:17:36.366 [2024-11-21 00:06:26.523497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.366 [2024-11-21 00:06:26.523575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.366 [2024-11-21 00:06:26.523591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:36.366 [2024-11-21 00:06:26.523605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:36.366 [2024-11-21 00:06:26.523614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.366 [2024-11-21 00:06:26.524319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.366 [2024-11-21 00:06:26.524357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:36.366 [2024-11-21 00:06:26.524369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:17:36.366 [2024-11-21 00:06:26.524378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.366 [2024-11-21 00:06:26.524557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.366 [2024-11-21 00:06:26.524571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:36.366 [2024-11-21 00:06:26.524580] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:17:36.366 [2024-11-21 00:06:26.524593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.366 [2024-11-21 00:06:26.534781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.366 [2024-11-21 00:06:26.534832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:36.366 [2024-11-21 00:06:26.534845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.164 ms 00:17:36.366 [2024-11-21 00:06:26.534854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.366 [2024-11-21 00:06:26.539639] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:36.366 [2024-11-21 00:06:26.539694] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:36.366 [2024-11-21 00:06:26.539708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.366 [2024-11-21 00:06:26.539718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:36.366 [2024-11-21 00:06:26.539728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.731 ms 00:17:36.366 [2024-11-21 00:06:26.539736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.366 [2024-11-21 00:06:26.555984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.366 [2024-11-21 00:06:26.556030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:36.366 [2024-11-21 00:06:26.556043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.168 ms 00:17:36.366 [2024-11-21 00:06:26.556052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.366 [2024-11-21 00:06:26.558942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.366 [2024-11-21 00:06:26.558989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:36.366 [2024-11-21 00:06:26.558999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:17:36.366 [2024-11-21 00:06:26.559007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.366 [2024-11-21 00:06:26.561562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.366 [2024-11-21 00:06:26.561605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:36.366 [2024-11-21 00:06:26.561625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.500 ms 00:17:36.367 [2024-11-21 00:06:26.561634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.367 [2024-11-21 00:06:26.561981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.367 [2024-11-21 00:06:26.562005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:36.367 [2024-11-21 00:06:26.562018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:36.367 [2024-11-21 00:06:26.562027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.367 [2024-11-21 00:06:26.592666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.367 [2024-11-21 00:06:26.592720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:36.367 [2024-11-21 00:06:26.592733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
30.608 ms 00:17:36.367 [2024-11-21 00:06:26.592742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.367 [2024-11-21 00:06:26.601758] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:36.367 [2024-11-21 00:06:26.625977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.367 [2024-11-21 00:06:26.626029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:36.367 [2024-11-21 00:06:26.626042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.144 ms 00:17:36.367 [2024-11-21 00:06:26.626052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.367 [2024-11-21 00:06:26.626164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.367 [2024-11-21 00:06:26.626177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:36.367 [2024-11-21 00:06:26.626188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:36.367 [2024-11-21 00:06:26.626207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.367 [2024-11-21 00:06:26.626280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.367 [2024-11-21 00:06:26.626320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:36.367 [2024-11-21 00:06:26.626331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:36.367 [2024-11-21 00:06:26.626339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.367 [2024-11-21 00:06:26.626370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.367 [2024-11-21 00:06:26.626380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:36.367 [2024-11-21 00:06:26.626389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:36.367 [2024-11-21 00:06:26.626398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.367 [2024-11-21 00:06:26.626440] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:36.367 [2024-11-21 00:06:26.626453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.367 [2024-11-21 00:06:26.626466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:36.367 [2024-11-21 00:06:26.626475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:36.367 [2024-11-21 00:06:26.626483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.367 [2024-11-21 00:06:26.633325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.367 [2024-11-21 00:06:26.633374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:36.367 [2024-11-21 00:06:26.633386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.817 ms 00:17:36.367 [2024-11-21 00:06:26.633405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.367 [2024-11-21 00:06:26.633508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.367 [2024-11-21 00:06:26.633524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:36.367 [2024-11-21 00:06:26.633536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:36.367 [2024-11-21 00:06:26.633550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.367 
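(Annotation on the layout dump above: the "SB metadata layout" records give each region's offset and size in hexadecimal FTL blocks, while the "NV cache layout" dump gives the same geometry in MiB. Assuming a 4 KiB FTL block — which the numbers bear out — the two views agree at 256 blocks per MiB; e.g. the type:0x2 entry (blk_offs:0x20, blk_sz:0x5a00) lines up with "Region l2p, offset: 0.12 MiB, blocks: 90.00 MiB". A minimal bash sketch, where blk_to_mib is a hypothetical helper for illustration and not part of the SPDK tree, reproduces two of the dumped figures:

  # 1 FTL block = 4096 bytes, so 256 blocks per MiB; truncate to two
  # decimal places the same way the dump does.
  blk_to_mib() {
    printf '%d.%02d MiB\n' $(( $1 / 256 )) $(( ($1 % 256) * 100 / 256 ))
  }
  blk_to_mib $((0x20))    # l2p blk_offs:0x20  -> 0.12 MiB  (matches "Region l2p / offset: 0.12 MiB")
  blk_to_mib $((0x5a00))  # l2p blk_sz:0x5a00  -> 90.00 MiB (matches "Region l2p / blocks: 90.00 MiB")

The same conversion checks out for the other regions, e.g. type:0x3 at blk_offs:0x5a20 -> 90.12 MiB, matching "Region band_md / offset: 90.12 MiB".)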
[2024-11-21 00:06:26.635227] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:36.367 [2024-11-21 00:06:26.636659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 181.922 ms, result 0 00:17:36.367 [2024-11-21 00:06:26.638264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:36.367 [2024-11-21 00:06:26.645426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:37.310  [2024-11-21T00:06:29.112Z] Copying: 13/256 [MB] (13 MBps) [2024-11-21T00:06:30.042Z] Copying: 25/256 [MB] (11 MBps) [2024-11-21T00:06:30.981Z] Copying: 36/256 [MB] (11 MBps) [2024-11-21T00:06:31.915Z] Copying: 47/256 [MB] (11 MBps) [2024-11-21T00:06:32.868Z] Copying: 61/256 [MB] (14 MBps) [2024-11-21T00:06:33.803Z] Copying: 73/256 [MB] (11 MBps) [2024-11-21T00:06:34.783Z] Copying: 87/256 [MB] (14 MBps) [2024-11-21T00:06:35.739Z] Copying: 101/256 [MB] (13 MBps) [2024-11-21T00:06:37.115Z] Copying: 113/256 [MB] (12 MBps) [2024-11-21T00:06:38.051Z] Copying: 125/256 [MB] (12 MBps) [2024-11-21T00:06:38.986Z] Copying: 137/256 [MB] (11 MBps) [2024-11-21T00:06:39.919Z] Copying: 149/256 [MB] (12 MBps) [2024-11-21T00:06:40.854Z] Copying: 161/256 [MB] (12 MBps) [2024-11-21T00:06:41.788Z] Copying: 173/256 [MB] (11 MBps) [2024-11-21T00:06:42.722Z] Copying: 184/256 [MB] (11 MBps) [2024-11-21T00:06:44.098Z] Copying: 196/256 [MB] (11 MBps) [2024-11-21T00:06:45.033Z] Copying: 208/256 [MB] (11 MBps) [2024-11-21T00:06:45.967Z] Copying: 221/256 [MB] (13 MBps) [2024-11-21T00:06:46.904Z] Copying: 234/256 [MB] (12 MBps) [2024-11-21T00:06:47.846Z] Copying: 246/256 [MB] (11 MBps) [2024-11-21T00:06:47.846Z] Copying: 256/256 [MB] (average 12 MBps)[2024-11-21 00:06:47.835800] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.425 [2024-11-21 00:06:47.837863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.425 [2024-11-21 00:06:47.837898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:57.425 [2024-11-21 00:06:47.837913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:57.425 [2024-11-21 00:06:47.837925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.425 [2024-11-21 00:06:47.837948] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:57.425 [2024-11-21 00:06:47.838545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.425 [2024-11-21 00:06:47.838567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:57.425 [2024-11-21 00:06:47.838577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:17:57.425 [2024-11-21 00:06:47.838585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.425 [2024-11-21 00:06:47.838849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.425 [2024-11-21 00:06:47.838867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:57.425 [2024-11-21 00:06:47.838876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:17:57.425 [2024-11-21 00:06:47.838891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.425 [2024-11-21 00:06:47.843009] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.425 [2024-11-21 00:06:47.843035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:57.425 [2024-11-21 00:06:47.843046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.100 ms 00:17:57.425 [2024-11-21 00:06:47.843054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.687 [2024-11-21 00:06:47.849984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.687 [2024-11-21 00:06:47.850009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:57.687 [2024-11-21 00:06:47.850017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.912 ms 00:17:57.687 [2024-11-21 00:06:47.850024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.687 [2024-11-21 00:06:47.852455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.687 [2024-11-21 00:06:47.852488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:57.687 [2024-11-21 00:06:47.852497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:17:57.687 [2024-11-21 00:06:47.852503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.687 [2024-11-21 00:06:47.855742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.687 [2024-11-21 00:06:47.855771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:57.687 [2024-11-21 00:06:47.855784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms 00:17:57.687 [2024-11-21 00:06:47.855790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.687 [2024-11-21 00:06:47.855885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.687 [2024-11-21 00:06:47.855893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:57.687 [2024-11-21 00:06:47.855899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:57.687 [2024-11-21 00:06:47.855905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.687 [2024-11-21 00:06:47.857963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.687 [2024-11-21 00:06:47.857989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:57.687 [2024-11-21 00:06:47.857995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.045 ms 00:17:57.687 [2024-11-21 00:06:47.858001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.687 [2024-11-21 00:06:47.859974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.687 [2024-11-21 00:06:47.860000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:57.687 [2024-11-21 00:06:47.860007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.946 ms 00:17:57.687 [2024-11-21 00:06:47.860013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.687 [2024-11-21 00:06:47.861886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.687 [2024-11-21 00:06:47.861913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:57.687 [2024-11-21 00:06:47.861919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.850 ms 00:17:57.687 [2024-11-21 00:06:47.861926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:57.687 [2024-11-21 00:06:47.863414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.687 [2024-11-21 00:06:47.863438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:57.687 [2024-11-21 00:06:47.863446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:17:57.687 [2024-11-21 00:06:47.863452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.687 [2024-11-21 00:06:47.863477] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:57.687 [2024-11-21 00:06:47.863499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 
261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:57.687 [2024-11-21 00:06:47.863836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863921] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.863996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 
00:06:47.864070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:57.688 [2024-11-21 00:06:47.864104] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:57.688 [2024-11-21 00:06:47.864111] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2907dcd9-a39c-45f6-a091-f2f3bc6dbd9e 00:17:57.688 [2024-11-21 00:06:47.864117] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:57.688 [2024-11-21 00:06:47.864123] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:57.688 [2024-11-21 00:06:47.864129] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:57.688 [2024-11-21 00:06:47.864135] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:57.688 [2024-11-21 00:06:47.864140] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:57.688 [2024-11-21 00:06:47.864147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:57.688 [2024-11-21 00:06:47.864153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:57.688 [2024-11-21 00:06:47.864158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:57.688 [2024-11-21 00:06:47.864164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:57.688 [2024-11-21 00:06:47.864169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.688 [2024-11-21 00:06:47.864175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:57.688 [2024-11-21 00:06:47.864187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:17:57.688 [2024-11-21 00:06:47.864192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.688 [2024-11-21 00:06:47.865921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.688 [2024-11-21 00:06:47.865942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:57.688 [2024-11-21 00:06:47.865950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:17:57.688 [2024-11-21 00:06:47.865957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.688 [2024-11-21 00:06:47.866044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.688 [2024-11-21 00:06:47.866056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:57.688 [2024-11-21 00:06:47.866063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:57.688 [2024-11-21 00:06:47.866068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.688 [2024-11-21 00:06:47.871398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.688 [2024-11-21 00:06:47.871424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.688 
[2024-11-21 00:06:47.871432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.688 [2024-11-21 00:06:47.871438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.688 [2024-11-21 00:06:47.871496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.688 [2024-11-21 00:06:47.871506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.688 [2024-11-21 00:06:47.871513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.688 [2024-11-21 00:06:47.871519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.688 [2024-11-21 00:06:47.871551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.688 [2024-11-21 00:06:47.871559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.688 [2024-11-21 00:06:47.871566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.688 [2024-11-21 00:06:47.871573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.688 [2024-11-21 00:06:47.871588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.688 [2024-11-21 00:06:47.871595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.688 [2024-11-21 00:06:47.871603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.688 [2024-11-21 00:06:47.871609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.688 [2024-11-21 00:06:47.882077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.688 [2024-11-21 00:06:47.882108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.688 [2024-11-21 00:06:47.882117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.688 [2024-11-21 00:06:47.882123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.688 [2024-11-21 00:06:47.890730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.688 [2024-11-21 00:06:47.890767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.688 [2024-11-21 00:06:47.890776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.688 [2024-11-21 00:06:47.890783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.688 [2024-11-21 00:06:47.890809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.688 [2024-11-21 00:06:47.890816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.688 [2024-11-21 00:06:47.890822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.688 [2024-11-21 00:06:47.890833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.688 [2024-11-21 00:06:47.890859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.688 [2024-11-21 00:06:47.890866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.688 [2024-11-21 00:06:47.890873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.688 [2024-11-21 00:06:47.890881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.688 [2024-11-21 00:06:47.890941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.688 [2024-11-21 00:06:47.890953] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.688 [2024-11-21 00:06:47.890960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.689 [2024-11-21 00:06:47.890967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.689 [2024-11-21 00:06:47.890993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.689 [2024-11-21 00:06:47.891001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:57.689 [2024-11-21 00:06:47.891008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.689 [2024-11-21 00:06:47.891018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.689 [2024-11-21 00:06:47.891064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.689 [2024-11-21 00:06:47.891071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.689 [2024-11-21 00:06:47.891077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.689 [2024-11-21 00:06:47.891084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.689 [2024-11-21 00:06:47.891131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.689 [2024-11-21 00:06:47.891140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.689 [2024-11-21 00:06:47.891147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.689 [2024-11-21 00:06:47.891155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.689 [2024-11-21 00:06:47.891282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.400 ms, result 0 00:17:57.689 00:17:57.689 00:17:57.689 00:06:48 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:58.260 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:58.260 00:06:48 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:58.260 00:06:48 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:58.260 00:06:48 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:58.260 00:06:48 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:58.260 00:06:48 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:58.520 00:06:48 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:58.520 00:06:48 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85668 00:17:58.520 Process with pid 85668 is not found 00:17:58.520 00:06:48 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85668 ']' 00:17:58.520 00:06:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85668 00:17:58.520 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85668) - No such process 00:17:58.520 00:06:48 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 85668 is not found' 00:17:58.520 00:17:58.520 real 1m22.893s 00:17:58.520 user 1m45.452s 00:17:58.520 sys 0m5.756s 00:17:58.520 00:06:48 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:58.520 00:06:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:58.520 ************************************ 00:17:58.520 END TEST ftl_trim 00:17:58.520 
************************************ 00:17:58.520 00:06:48 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:58.520 00:06:48 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:58.520 00:06:48 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:58.520 00:06:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:58.520 ************************************ 00:17:58.520 START TEST ftl_restore 00:17:58.520 ************************************ 00:17:58.520 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:58.520 * Looking for test storage... 00:17:58.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.520 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:58.520 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:58.520 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:58.781 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:58.781 00:06:48 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.781 00:06:48 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.781 00:06:48 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.781 00:06:48 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.781 00:06:48 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.782 00:06:48 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:58.782 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.782 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:58.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.782 --rc genhtml_branch_coverage=1 00:17:58.782 --rc genhtml_function_coverage=1 00:17:58.782 --rc genhtml_legend=1 00:17:58.782 --rc geninfo_all_blocks=1 00:17:58.782 --rc geninfo_unexecuted_blocks=1 00:17:58.782 00:17:58.782 ' 00:17:58.782 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:58.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.782 --rc genhtml_branch_coverage=1 00:17:58.782 --rc genhtml_function_coverage=1 00:17:58.782 --rc genhtml_legend=1 00:17:58.782 --rc geninfo_all_blocks=1 00:17:58.782 --rc geninfo_unexecuted_blocks=1 00:17:58.782 00:17:58.782 ' 00:17:58.782 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:58.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.782 --rc genhtml_branch_coverage=1 00:17:58.782 --rc genhtml_function_coverage=1 00:17:58.782 --rc genhtml_legend=1 00:17:58.782 --rc geninfo_all_blocks=1 00:17:58.782 --rc geninfo_unexecuted_blocks=1 00:17:58.782 00:17:58.782 ' 00:17:58.782 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:58.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.782 --rc genhtml_branch_coverage=1 00:17:58.782 --rc genhtml_function_coverage=1 00:17:58.782 --rc genhtml_legend=1 00:17:58.782 --rc geninfo_all_blocks=1 00:17:58.782 --rc geninfo_unexecuted_blocks=1 00:17:58.782 00:17:58.782 ' 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.bcCxe218UE 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:58.782 
00:06:48 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86032 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86032 00:17:58.782 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86032 ']' 00:17:58.782 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.782 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.782 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.782 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.782 00:06:48 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:58.782 00:06:48 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.782 [2024-11-21 00:06:49.081859] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:58.782 [2024-11-21 00:06:49.082015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86032 ] 00:17:59.042 [2024-11-21 00:06:49.215537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.042 [2024-11-21 00:06:49.288918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.613 00:06:49 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.613 00:06:49 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:59.613 00:06:49 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:59.613 00:06:49 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:59.613 00:06:49 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:59.613 00:06:49 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:59.613 00:06:49 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:59.613 00:06:49 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:59.875 00:06:50 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:59.875 00:06:50 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:59.875 00:06:50 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:59.875 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:59.875 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:59.875 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:59.875 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:59.875 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:00.134 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:00.134 { 00:18:00.134 "name": "nvme0n1", 00:18:00.134 "aliases": [ 00:18:00.134 "37784df0-208f-47f5-8652-ce9bc4e7ca4b" 00:18:00.134 ], 00:18:00.134 "product_name": "NVMe disk", 00:18:00.134 "block_size": 4096, 00:18:00.134 "num_blocks": 1310720, 00:18:00.134 "uuid": 
"37784df0-208f-47f5-8652-ce9bc4e7ca4b", 00:18:00.134 "numa_id": -1, 00:18:00.134 "assigned_rate_limits": { 00:18:00.134 "rw_ios_per_sec": 0, 00:18:00.134 "rw_mbytes_per_sec": 0, 00:18:00.134 "r_mbytes_per_sec": 0, 00:18:00.134 "w_mbytes_per_sec": 0 00:18:00.134 }, 00:18:00.134 "claimed": true, 00:18:00.134 "claim_type": "read_many_write_one", 00:18:00.134 "zoned": false, 00:18:00.134 "supported_io_types": { 00:18:00.134 "read": true, 00:18:00.134 "write": true, 00:18:00.134 "unmap": true, 00:18:00.134 "flush": true, 00:18:00.134 "reset": true, 00:18:00.134 "nvme_admin": true, 00:18:00.134 "nvme_io": true, 00:18:00.134 "nvme_io_md": false, 00:18:00.134 "write_zeroes": true, 00:18:00.134 "zcopy": false, 00:18:00.134 "get_zone_info": false, 00:18:00.134 "zone_management": false, 00:18:00.134 "zone_append": false, 00:18:00.134 "compare": true, 00:18:00.134 "compare_and_write": false, 00:18:00.134 "abort": true, 00:18:00.134 "seek_hole": false, 00:18:00.134 "seek_data": false, 00:18:00.134 "copy": true, 00:18:00.134 "nvme_iov_md": false 00:18:00.134 }, 00:18:00.134 "driver_specific": { 00:18:00.134 "nvme": [ 00:18:00.134 { 00:18:00.134 "pci_address": "0000:00:11.0", 00:18:00.134 "trid": { 00:18:00.134 "trtype": "PCIe", 00:18:00.134 "traddr": "0000:00:11.0" 00:18:00.134 }, 00:18:00.134 "ctrlr_data": { 00:18:00.134 "cntlid": 0, 00:18:00.134 "vendor_id": "0x1b36", 00:18:00.134 "model_number": "QEMU NVMe Ctrl", 00:18:00.134 "serial_number": "12341", 00:18:00.134 "firmware_revision": "8.0.0", 00:18:00.134 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:00.134 "oacs": { 00:18:00.134 "security": 0, 00:18:00.134 "format": 1, 00:18:00.134 "firmware": 0, 00:18:00.134 "ns_manage": 1 00:18:00.134 }, 00:18:00.134 "multi_ctrlr": false, 00:18:00.134 "ana_reporting": false 00:18:00.134 }, 00:18:00.134 "vs": { 00:18:00.134 "nvme_version": "1.4" 00:18:00.134 }, 00:18:00.134 "ns_data": { 00:18:00.134 "id": 1, 00:18:00.134 "can_share": false 00:18:00.134 } 00:18:00.134 } 00:18:00.134 ], 00:18:00.134 "mp_policy": "active_passive" 00:18:00.134 } 00:18:00.134 } 00:18:00.134 ]' 00:18:00.134 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:00.134 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:00.134 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:00.134 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:00.134 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:00.134 00:06:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:00.134 00:06:50 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:00.134 00:06:50 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:00.134 00:06:50 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:00.134 00:06:50 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:00.135 00:06:50 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:00.393 00:06:50 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=edb36aba-1296-43a4-b554-5e0360e2821c 00:18:00.393 00:06:50 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:00.393 00:06:50 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u edb36aba-1296-43a4-b554-5e0360e2821c 00:18:00.651 00:06:50 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:00.909 00:06:51 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=f687c260-e989-4475-9d6d-04abf91b28e2 00:18:00.909 00:06:51 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f687c260-e989-4475-9d6d-04abf91b28e2 00:18:01.167 00:06:51 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=f802681f-2f79-47bb-bd63-68618a19aa36 00:18:01.167 00:06:51 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:01.167 00:06:51 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f802681f-2f79-47bb-bd63-68618a19aa36 00:18:01.167 00:06:51 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:01.167 00:06:51 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:01.167 00:06:51 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=f802681f-2f79-47bb-bd63-68618a19aa36 00:18:01.167 00:06:51 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:01.167 00:06:51 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size f802681f-2f79-47bb-bd63-68618a19aa36 00:18:01.167 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f802681f-2f79-47bb-bd63-68618a19aa36 00:18:01.167 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:01.167 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:01.167 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:01.167 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f802681f-2f79-47bb-bd63-68618a19aa36 00:18:01.167 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:01.167 { 00:18:01.167 "name": "f802681f-2f79-47bb-bd63-68618a19aa36", 00:18:01.167 "aliases": [ 00:18:01.167 "lvs/nvme0n1p0" 00:18:01.167 ], 00:18:01.167 "product_name": "Logical Volume", 00:18:01.167 "block_size": 4096, 00:18:01.167 "num_blocks": 26476544, 00:18:01.167 "uuid": "f802681f-2f79-47bb-bd63-68618a19aa36", 00:18:01.167 "assigned_rate_limits": { 00:18:01.167 "rw_ios_per_sec": 0, 00:18:01.167 "rw_mbytes_per_sec": 0, 00:18:01.167 "r_mbytes_per_sec": 0, 00:18:01.167 "w_mbytes_per_sec": 0 00:18:01.167 }, 00:18:01.167 "claimed": false, 00:18:01.167 "zoned": false, 00:18:01.167 "supported_io_types": { 00:18:01.167 "read": true, 00:18:01.167 "write": true, 00:18:01.167 "unmap": true, 00:18:01.167 "flush": false, 00:18:01.167 "reset": true, 00:18:01.167 "nvme_admin": false, 00:18:01.167 "nvme_io": false, 00:18:01.167 "nvme_io_md": false, 00:18:01.167 "write_zeroes": true, 00:18:01.167 "zcopy": false, 00:18:01.167 "get_zone_info": false, 00:18:01.167 "zone_management": false, 00:18:01.167 "zone_append": false, 00:18:01.167 "compare": false, 00:18:01.167 "compare_and_write": false, 00:18:01.167 "abort": false, 00:18:01.167 "seek_hole": true, 00:18:01.167 "seek_data": true, 00:18:01.167 "copy": false, 00:18:01.167 "nvme_iov_md": false 00:18:01.167 }, 00:18:01.167 "driver_specific": { 00:18:01.167 "lvol": { 00:18:01.167 "lvol_store_uuid": "f687c260-e989-4475-9d6d-04abf91b28e2", 00:18:01.167 "base_bdev": "nvme0n1", 00:18:01.167 "thin_provision": true, 00:18:01.167 "num_allocated_clusters": 0, 00:18:01.167 "snapshot": false, 00:18:01.167 "clone": false, 00:18:01.167 "esnap_clone": false 00:18:01.167 } 00:18:01.167 } 00:18:01.167 } 00:18:01.167 ]' 00:18:01.167 00:06:51 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:01.167 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:01.167 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:01.425 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:01.425 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:01.425 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:01.425 00:06:51 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:01.425 00:06:51 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:01.425 00:06:51 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:01.684 00:06:51 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:01.684 00:06:51 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:01.684 00:06:51 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size f802681f-2f79-47bb-bd63-68618a19aa36 00:18:01.684 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f802681f-2f79-47bb-bd63-68618a19aa36 00:18:01.684 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:01.684 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:01.684 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:01.684 00:06:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f802681f-2f79-47bb-bd63-68618a19aa36 00:18:01.684 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:01.684 { 00:18:01.684 "name": "f802681f-2f79-47bb-bd63-68618a19aa36", 00:18:01.684 "aliases": [ 00:18:01.684 "lvs/nvme0n1p0" 00:18:01.684 ], 00:18:01.684 "product_name": "Logical Volume", 00:18:01.684 "block_size": 4096, 00:18:01.684 "num_blocks": 26476544, 00:18:01.684 "uuid": "f802681f-2f79-47bb-bd63-68618a19aa36", 00:18:01.684 "assigned_rate_limits": { 00:18:01.684 "rw_ios_per_sec": 0, 00:18:01.684 "rw_mbytes_per_sec": 0, 00:18:01.684 "r_mbytes_per_sec": 0, 00:18:01.684 "w_mbytes_per_sec": 0 00:18:01.684 }, 00:18:01.684 "claimed": false, 00:18:01.684 "zoned": false, 00:18:01.684 "supported_io_types": { 00:18:01.684 "read": true, 00:18:01.684 "write": true, 00:18:01.684 "unmap": true, 00:18:01.684 "flush": false, 00:18:01.684 "reset": true, 00:18:01.684 "nvme_admin": false, 00:18:01.684 "nvme_io": false, 00:18:01.684 "nvme_io_md": false, 00:18:01.684 "write_zeroes": true, 00:18:01.684 "zcopy": false, 00:18:01.684 "get_zone_info": false, 00:18:01.684 "zone_management": false, 00:18:01.684 "zone_append": false, 00:18:01.684 "compare": false, 00:18:01.684 "compare_and_write": false, 00:18:01.684 "abort": false, 00:18:01.684 "seek_hole": true, 00:18:01.684 "seek_data": true, 00:18:01.684 "copy": false, 00:18:01.684 "nvme_iov_md": false 00:18:01.684 }, 00:18:01.684 "driver_specific": { 00:18:01.684 "lvol": { 00:18:01.684 "lvol_store_uuid": "f687c260-e989-4475-9d6d-04abf91b28e2", 00:18:01.684 "base_bdev": "nvme0n1", 00:18:01.684 "thin_provision": true, 00:18:01.684 "num_allocated_clusters": 0, 00:18:01.684 "snapshot": false, 00:18:01.684 "clone": false, 00:18:01.684 "esnap_clone": false 00:18:01.684 } 00:18:01.684 } 00:18:01.684 } 00:18:01.684 ]' 00:18:01.684 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
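Note: the jq probes here pull block_size and num_blocks out of the bdev dump above, and get_bdev_size then just reports their product in MiB. With this run's values the arithmetic checks out — a sketch, where $rpc_py is the variable the trace set earlier:

    bs=$($rpc_py bdev_get_bdevs -b f802681f-2f79-47bb-bd63-68618a19aa36 | jq '.[] .block_size')   # 4096
    nb=$($rpc_py bdev_get_bdevs -b f802681f-2f79-47bb-bd63-68618a19aa36 | jq '.[] .num_blocks')   # 26476544
    echo $(( bs * nb / 1024 / 1024 ))   # 103424 MiB; the earlier nvme0n1 probe gives 4096 * 1310720 -> 5120 MiB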
00:18:01.684 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:01.684 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:01.943 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:01.943 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:01.943 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:01.943 00:06:52 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:01.943 00:06:52 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:01.943 00:06:52 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:01.943 00:06:52 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size f802681f-2f79-47bb-bd63-68618a19aa36 00:18:01.943 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f802681f-2f79-47bb-bd63-68618a19aa36 00:18:01.943 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:01.943 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:01.943 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:01.943 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f802681f-2f79-47bb-bd63-68618a19aa36 00:18:02.201 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:02.201 { 00:18:02.201 "name": "f802681f-2f79-47bb-bd63-68618a19aa36", 00:18:02.201 "aliases": [ 00:18:02.201 "lvs/nvme0n1p0" 00:18:02.201 ], 00:18:02.201 "product_name": "Logical Volume", 00:18:02.201 "block_size": 4096, 00:18:02.201 "num_blocks": 26476544, 00:18:02.201 "uuid": "f802681f-2f79-47bb-bd63-68618a19aa36", 00:18:02.201 "assigned_rate_limits": { 00:18:02.201 "rw_ios_per_sec": 0, 00:18:02.201 "rw_mbytes_per_sec": 0, 00:18:02.201 "r_mbytes_per_sec": 0, 00:18:02.201 "w_mbytes_per_sec": 0 00:18:02.201 }, 00:18:02.201 "claimed": false, 00:18:02.201 "zoned": false, 00:18:02.201 "supported_io_types": { 00:18:02.201 "read": true, 00:18:02.201 "write": true, 00:18:02.201 "unmap": true, 00:18:02.201 "flush": false, 00:18:02.201 "reset": true, 00:18:02.201 "nvme_admin": false, 00:18:02.201 "nvme_io": false, 00:18:02.201 "nvme_io_md": false, 00:18:02.201 "write_zeroes": true, 00:18:02.201 "zcopy": false, 00:18:02.201 "get_zone_info": false, 00:18:02.201 "zone_management": false, 00:18:02.201 "zone_append": false, 00:18:02.201 "compare": false, 00:18:02.201 "compare_and_write": false, 00:18:02.201 "abort": false, 00:18:02.201 "seek_hole": true, 00:18:02.201 "seek_data": true, 00:18:02.201 "copy": false, 00:18:02.201 "nvme_iov_md": false 00:18:02.201 }, 00:18:02.201 "driver_specific": { 00:18:02.201 "lvol": { 00:18:02.201 "lvol_store_uuid": "f687c260-e989-4475-9d6d-04abf91b28e2", 00:18:02.201 "base_bdev": "nvme0n1", 00:18:02.201 "thin_provision": true, 00:18:02.201 "num_allocated_clusters": 0, 00:18:02.201 "snapshot": false, 00:18:02.201 "clone": false, 00:18:02.201 "esnap_clone": false 00:18:02.201 } 00:18:02.201 } 00:18:02.201 } 00:18:02.202 ]' 00:18:02.202 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:02.202 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:02.202 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:02.202 00:06:52 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:18:02.202 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:02.202 00:06:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:02.202 00:06:52 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:02.202 00:06:52 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f802681f-2f79-47bb-bd63-68618a19aa36 --l2p_dram_limit 10' 00:18:02.202 00:06:52 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:02.202 00:06:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:02.202 00:06:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:02.202 00:06:52 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:02.202 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:02.202 00:06:52 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f802681f-2f79-47bb-bd63-68618a19aa36 --l2p_dram_limit 10 -c nvc0n1p0 00:18:02.463 [2024-11-21 00:06:52.760305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.463 [2024-11-21 00:06:52.760348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:02.463 [2024-11-21 00:06:52.760360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:02.463 [2024-11-21 00:06:52.760368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.463 [2024-11-21 00:06:52.760406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.463 [2024-11-21 00:06:52.760416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.463 [2024-11-21 00:06:52.760422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:02.463 [2024-11-21 00:06:52.760435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.463 [2024-11-21 00:06:52.760452] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:02.463 [2024-11-21 00:06:52.760646] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:02.463 [2024-11-21 00:06:52.760659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.463 [2024-11-21 00:06:52.760670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.463 [2024-11-21 00:06:52.760678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:18:02.463 [2024-11-21 00:06:52.760686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.463 [2024-11-21 00:06:52.760709] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5652dacb-6cf0-4515-b30c-e9fc0a790556 00:18:02.463 [2024-11-21 00:06:52.761992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.463 [2024-11-21 00:06:52.762017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:02.463 [2024-11-21 00:06:52.762026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:02.463 [2024-11-21 00:06:52.762033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.463 [2024-11-21 00:06:52.768979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.463 [2024-11-21 
00:06:52.769006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.463 [2024-11-21 00:06:52.769015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.877 ms 00:18:02.463 [2024-11-21 00:06:52.769021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.463 [2024-11-21 00:06:52.769086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.463 [2024-11-21 00:06:52.769093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.463 [2024-11-21 00:06:52.769101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:02.463 [2024-11-21 00:06:52.769109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.463 [2024-11-21 00:06:52.769147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.463 [2024-11-21 00:06:52.769158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:02.463 [2024-11-21 00:06:52.769166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:02.463 [2024-11-21 00:06:52.769172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.463 [2024-11-21 00:06:52.769191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:02.463 [2024-11-21 00:06:52.770854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.463 [2024-11-21 00:06:52.770879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.463 [2024-11-21 00:06:52.770889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:18:02.463 [2024-11-21 00:06:52.770896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.463 [2024-11-21 00:06:52.770924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.463 [2024-11-21 00:06:52.770935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:02.463 [2024-11-21 00:06:52.770941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:02.463 [2024-11-21 00:06:52.770950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.463 [2024-11-21 00:06:52.770963] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:02.463 [2024-11-21 00:06:52.771078] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:02.463 [2024-11-21 00:06:52.771088] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:02.463 [2024-11-21 00:06:52.771098] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:02.464 [2024-11-21 00:06:52.771106] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:02.464 [2024-11-21 00:06:52.771116] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:02.464 [2024-11-21 00:06:52.771123] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:02.464 [2024-11-21 00:06:52.771134] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:02.464 [2024-11-21 00:06:52.771140] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:02.464 [2024-11-21 00:06:52.771147] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:02.464 [2024-11-21 00:06:52.771155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.464 [2024-11-21 00:06:52.771162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:02.464 [2024-11-21 00:06:52.771169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:18:02.464 [2024-11-21 00:06:52.771177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.464 [2024-11-21 00:06:52.771242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.464 [2024-11-21 00:06:52.771252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:02.464 [2024-11-21 00:06:52.771258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:02.464 [2024-11-21 00:06:52.771266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.464 [2024-11-21 00:06:52.771493] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:02.464 [2024-11-21 00:06:52.771531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:02.464 [2024-11-21 00:06:52.771550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.464 [2024-11-21 00:06:52.771604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.464 [2024-11-21 00:06:52.771624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:02.464 [2024-11-21 00:06:52.771640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:02.464 [2024-11-21 00:06:52.771677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:02.464 [2024-11-21 00:06:52.771696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:02.464 [2024-11-21 00:06:52.771711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:02.464 [2024-11-21 00:06:52.771753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.464 [2024-11-21 00:06:52.771770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:02.464 [2024-11-21 00:06:52.771788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:02.464 [2024-11-21 00:06:52.771822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.464 [2024-11-21 00:06:52.771843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:02.464 [2024-11-21 00:06:52.771858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:02.464 [2024-11-21 00:06:52.771875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.464 [2024-11-21 00:06:52.771988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:02.464 [2024-11-21 00:06:52.771999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:02.464 [2024-11-21 00:06:52.772005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.464 [2024-11-21 00:06:52.772012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:02.464 [2024-11-21 00:06:52.772018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:02.464 [2024-11-21 00:06:52.772026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.464 [2024-11-21 00:06:52.772032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:02.464 
[2024-11-21 00:06:52.772039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:02.464 [2024-11-21 00:06:52.772044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.464 [2024-11-21 00:06:52.772051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:02.464 [2024-11-21 00:06:52.772057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:02.464 [2024-11-21 00:06:52.772067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.464 [2024-11-21 00:06:52.772072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:02.464 [2024-11-21 00:06:52.772080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:02.464 [2024-11-21 00:06:52.772085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.464 [2024-11-21 00:06:52.772092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:02.464 [2024-11-21 00:06:52.772097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:02.464 [2024-11-21 00:06:52.772104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:02.464 [2024-11-21 00:06:52.772109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:02.464 [2024-11-21 00:06:52.772117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:02.464 [2024-11-21 00:06:52.772122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:02.464 [2024-11-21 00:06:52.772129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:02.464 [2024-11-21 00:06:52.772134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:02.464 [2024-11-21 00:06:52.772141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.464 [2024-11-21 00:06:52.772146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:02.464 [2024-11-21 00:06:52.772153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:02.464 [2024-11-21 00:06:52.772158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.464 [2024-11-21 00:06:52.772164] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:02.464 [2024-11-21 00:06:52.772175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:02.464 [2024-11-21 00:06:52.772184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.464 [2024-11-21 00:06:52.772191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.464 [2024-11-21 00:06:52.772199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:02.464 [2024-11-21 00:06:52.772204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:02.464 [2024-11-21 00:06:52.772210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:02.464 [2024-11-21 00:06:52.772216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:02.464 [2024-11-21 00:06:52.772222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:02.464 [2024-11-21 00:06:52.772227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:02.464 [2024-11-21 00:06:52.772241] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:02.464 [2024-11-21 
00:06:52.772250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.464 [2024-11-21 00:06:52.772259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:02.464 [2024-11-21 00:06:52.772265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:02.464 [2024-11-21 00:06:52.772272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:02.464 [2024-11-21 00:06:52.772278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:02.464 [2024-11-21 00:06:52.772286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:02.464 [2024-11-21 00:06:52.772292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:02.464 [2024-11-21 00:06:52.772316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:02.464 [2024-11-21 00:06:52.772322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:02.464 [2024-11-21 00:06:52.772329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:02.464 [2024-11-21 00:06:52.772336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:02.464 [2024-11-21 00:06:52.772343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:02.464 [2024-11-21 00:06:52.772349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:02.464 [2024-11-21 00:06:52.772356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:02.464 [2024-11-21 00:06:52.772362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:02.464 [2024-11-21 00:06:52.772370] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:02.464 [2024-11-21 00:06:52.772378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.464 [2024-11-21 00:06:52.772386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:02.464 [2024-11-21 00:06:52.772392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:02.464 [2024-11-21 00:06:52.772399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:02.464 [2024-11-21 00:06:52.772405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:02.464 [2024-11-21 00:06:52.772413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.464 [2024-11-21 00:06:52.772419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:02.464 [2024-11-21 00:06:52.772429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:18:02.464 [2024-11-21 00:06:52.772434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.464 [2024-11-21 00:06:52.772477] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:02.464 [2024-11-21 00:06:52.772486] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:06.670 [2024-11-21 00:06:56.615470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.670 [2024-11-21 00:06:56.615836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:06.670 [2024-11-21 00:06:56.616341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3842.969 ms 00:18:06.670 [2024-11-21 00:06:56.616397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.670 [2024-11-21 00:06:56.635618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.670 [2024-11-21 00:06:56.635797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.670 [2024-11-21 00:06:56.635880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.011 ms 00:18:06.670 [2024-11-21 00:06:56.635907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.670 [2024-11-21 00:06:56.636079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.670 [2024-11-21 00:06:56.636168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:06.670 [2024-11-21 00:06:56.636202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:06.670 [2024-11-21 00:06:56.636223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.670 [2024-11-21 00:06:56.652517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.670 [2024-11-21 00:06:56.652569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.670 [2024-11-21 00:06:56.652588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.817 ms 00:18:06.670 [2024-11-21 00:06:56.652598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.670 [2024-11-21 00:06:56.652635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.670 [2024-11-21 00:06:56.652645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.670 [2024-11-21 00:06:56.652662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:06.670 [2024-11-21 00:06:56.652677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.670 [2024-11-21 00:06:56.653447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.670 [2024-11-21 00:06:56.653479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.670 [2024-11-21 00:06:56.653495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:18:06.670 [2024-11-21 00:06:56.653505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.670 
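Two observations on the bdev_ftl_create sequence above. First, the '[: : integer expression expected' from restore.sh line 54 is a benign test-script bug: an empty variable is compared with -eq, so the test command errors out and the branch is simply skipped; a guard such as [[ "${opt_val:-0}" -eq 1 ]] (variable name hypothetical) would avoid the noise. Second, scrubbing the NV cache dominates startup: 3842.969 ms of the 4095.097 ms that the 'FTL startup' summary reports further down. Totaling the per-step durations from a trace like this is a one-liner — a sketch, assuming one record per line, with ftl.log as a stand-in filename:

    awk '/duration:/ { sum += $(NF-1) } END { printf "total: %.3f ms\n", sum }' ftl.log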
[2024-11-21 00:06:56.653643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.670 [2024-11-21 00:06:56.653653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.670 [2024-11-21 00:06:56.653665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:18:06.670 [2024-11-21 00:06:56.653676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.670 [2024-11-21 00:06:56.670427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.670 [2024-11-21 00:06:56.670650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.670 [2024-11-21 00:06:56.670681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.721 ms 00:18:06.670 [2024-11-21 00:06:56.670693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.670 [2024-11-21 00:06:56.682233] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:06.670 [2024-11-21 00:06:56.687341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.670 [2024-11-21 00:06:56.687387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:06.670 [2024-11-21 00:06:56.687400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.518 ms 00:18:06.670 [2024-11-21 00:06:56.687411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.670 [2024-11-21 00:06:56.775681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.671 [2024-11-21 00:06:56.775741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:06.671 [2024-11-21 00:06:56.775754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.235 ms 00:18:06.671 [2024-11-21 00:06:56.775770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.671 [2024-11-21 00:06:56.775991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.671 [2024-11-21 00:06:56.776007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:06.671 [2024-11-21 00:06:56.776017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:18:06.671 [2024-11-21 00:06:56.776035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.671 [2024-11-21 00:06:56.781841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.671 [2024-11-21 00:06:56.782026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:06.671 [2024-11-21 00:06:56.782047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.769 ms 00:18:06.671 [2024-11-21 00:06:56.782060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.671 [2024-11-21 00:06:56.787339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.671 [2024-11-21 00:06:56.787398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:06.671 [2024-11-21 00:06:56.787413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.234 ms 00:18:06.671 [2024-11-21 00:06:56.787424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.671 [2024-11-21 00:06:56.787793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.671 [2024-11-21 00:06:56.787809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:06.671 
[2024-11-21 00:06:56.787821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:18:06.671 [2024-11-21 00:06:56.787834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.671 [2024-11-21 00:06:56.834593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.671 [2024-11-21 00:06:56.834654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:06.671 [2024-11-21 00:06:56.834668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.716 ms 00:18:06.671 [2024-11-21 00:06:56.834680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.671 [2024-11-21 00:06:56.842601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.671 [2024-11-21 00:06:56.842658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:06.671 [2024-11-21 00:06:56.842670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.841 ms 00:18:06.671 [2024-11-21 00:06:56.842682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.671 [2024-11-21 00:06:56.848176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.671 [2024-11-21 00:06:56.848229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:06.671 [2024-11-21 00:06:56.848239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.446 ms 00:18:06.671 [2024-11-21 00:06:56.848250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.671 [2024-11-21 00:06:56.854281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.671 [2024-11-21 00:06:56.854347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:06.671 [2024-11-21 00:06:56.854359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.961 ms 00:18:06.671 [2024-11-21 00:06:56.854373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.671 [2024-11-21 00:06:56.854427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.671 [2024-11-21 00:06:56.854443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:06.671 [2024-11-21 00:06:56.854456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:06.671 [2024-11-21 00:06:56.854472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.671 [2024-11-21 00:06:56.854556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.671 [2024-11-21 00:06:56.854574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:06.671 [2024-11-21 00:06:56.854583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:06.671 [2024-11-21 00:06:56.854595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.671 [2024-11-21 00:06:56.856106] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4095.097 ms, result 0 00:18:06.671 { 00:18:06.671 "name": "ftl0", 00:18:06.671 "uuid": "5652dacb-6cf0-4515-b30c-e9fc0a790556" 00:18:06.671 } 00:18:06.671 00:06:56 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:06.671 00:06:56 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:06.934 00:06:57 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:06.934 00:06:57 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:06.934 [2024-11-21 00:06:57.299685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.299894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:06.934 [2024-11-21 00:06:57.300324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:06.934 [2024-11-21 00:06:57.300353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.300414] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:06.934 [2024-11-21 00:06:57.301442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.301483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:06.934 [2024-11-21 00:06:57.301494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:18:06.934 [2024-11-21 00:06:57.301511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.301781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.301797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:06.934 [2024-11-21 00:06:57.301808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:18:06.934 [2024-11-21 00:06:57.301819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.305071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.305106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:06.934 [2024-11-21 00:06:57.305115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:18:06.934 [2024-11-21 00:06:57.305127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.311403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.311446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:06.934 [2024-11-21 00:06:57.311458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.258 ms 00:18:06.934 [2024-11-21 00:06:57.311470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.314684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.314891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:06.934 [2024-11-21 00:06:57.314910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.109 ms 00:18:06.934 [2024-11-21 00:06:57.314921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.322897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.323089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:06.934 [2024-11-21 00:06:57.323512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.688 ms 00:18:06.934 [2024-11-21 00:06:57.323553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.323712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.323731] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:06.934 [2024-11-21 00:06:57.323742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:06.934 [2024-11-21 00:06:57.323754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.327079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.327136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:06.934 [2024-11-21 00:06:57.327147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.302 ms 00:18:06.934 [2024-11-21 00:06:57.327159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.330122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.330327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:06.934 [2024-11-21 00:06:57.330345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.916 ms 00:18:06.934 [2024-11-21 00:06:57.330356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.332913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.332983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:06.934 [2024-11-21 00:06:57.332996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:18:06.934 [2024-11-21 00:06:57.333007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.335283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.934 [2024-11-21 00:06:57.335351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:06.934 [2024-11-21 00:06:57.335371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:18:06.934 [2024-11-21 00:06:57.335382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.934 [2024-11-21 00:06:57.335426] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:06.934 [2024-11-21 00:06:57.335452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:06.934 [2024-11-21 00:06:57.335465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:06.934 [2024-11-21 00:06:57.335479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:06.934 [2024-11-21 00:06:57.335487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:06.934 [2024-11-21 00:06:57.335503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:06.934 [2024-11-21 00:06:57.335512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:06.934 [2024-11-21 00:06:57.335522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:06.934 [2024-11-21 00:06:57.335530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:06.934 [2024-11-21 00:06:57.335541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:06.934 [2024-11-21 00:06:57.335549] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
[2024-11-21 00:06:57.335567 - 00:06:57.336434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 11-100: all 0 / 261120 wr_cnt: 0 state: free
[2024-11-21 00:06:57.336457] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-21 00:06:57.336466] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5652dacb-6cf0-4515-b30c-e9fc0a790556
[2024-11-21 00:06:57.336477] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-21 00:06:57.336486] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-21 00:06:57.336496] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-21 00:06:57.336505] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-21 00:06:57.336517] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-21 00:06:57.336528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-11-21 00:06:57.336541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
[2024-11-21 00:06:57.336547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
[2024-11-21 00:06:57.336556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
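Note: the WAF figure above appears to be total media writes divided by user writes, so with user writes still at 0 in this dump, 960 / 0 is printed as inf. A quick sanity check of that arithmetic in plain shell (values copied by hand from the dump):

    total_writes=960
    user_writes=0
    if [ "$user_writes" -eq 0 ]; then
        echo "WAF: inf"    # matches the dump: no user data written yet
    else
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.3f\n", t / u }'
    fi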
[2024-11-21 00:06:57.336563] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration: 1.138 ms, status: 0
[2024-11-21 00:06:57.339199] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration: 2.581 ms, status: 0
[2024-11-21 00:06:57.339406] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration: 0.119 ms, status: 0
[2024-11-21 00:06:57.350124] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.350291] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.350440] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.350501] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.369399] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.385493] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.385703] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.385803] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.385935] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.386008] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.386105] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.386216] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev': duration: 0.000 ms, status: 0
[2024-11-21 00:06:57.386514] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.757 ms, result 0
true
00:06:57 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86032
00:06:57 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86032 ']'
00:06:57 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86032
00:06:57 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname
00:06:57 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:57 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86032
killing process with pid 86032
00:06:57 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:57 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:57 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86032'
00:06:57 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86032
00:06:57 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86032
00:07:02 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.33714 s, 248 MB/s
00:07:06 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:07:08 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-21 00:07:08.972465] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
[2024-11-21 00:07:08.973139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86250 ]
[2024-11-21 00:07:09.104536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-21 00:07:09.147190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-21 00:07:09.245415] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-21 00:07:09.245624] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-21 00:07:09.399334] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration: 0.005 ms, status: 0
[2024-11-21 00:07:09.399425] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration: 0.025 ms, status: 0
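Note: the killprocess trace above is the usual probe-identify-signal-reap shell pattern: kill -0 probes that the pid is alive, ps --no-headers -o comm= fetches the command name (reactor_0 here, and sudo is never signalled directly), then kill followed by wait reaps the child. A minimal standalone sketch of that pattern (hypothetical helper, not the autotest_common.sh source):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # no pid given
        kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the trace above
        [ "$name" = sudo ] && return 1            # refuse to signal sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                   # reaping only works for our own child
    }

The dd rate reported a few lines later also checks out: 1073741824 bytes / 4.33714 s is about 247.6 MB/s, which dd rounds to the printed 248 MB/s.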
[2024-11-21 00:07:09.399468] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-21 00:07:09.399657] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-21 00:07:09.399669] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration: 0.205 ms, status: 0
[2024-11-21 00:07:09.400985] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-21 00:07:09.403802] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration: 2.819 ms, status: 0
[2024-11-21 00:07:09.403989] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration: 0.016 ms, status: 0
[2024-11-21 00:07:09.410250] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration: 6.194 ms, status: 0
[2024-11-21 00:07:09.410369] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration: 0.050 ms, status: 0
[2024-11-21 00:07:09.410426] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration: 0.006 ms, status: 0
[2024-11-21 00:07:09.410473] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-21 00:07:09.412005] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration: 1.538 ms, status: 0
[2024-11-21 00:07:09.412065] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration: 0.011 ms, status: 0
[2024-11-21 00:07:09.412107] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-21 00:07:09.412123] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-21 00:07:09.412160] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-21 00:07:09.412173] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-21 00:07:09.412255] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-21 00:07:09.412267] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-21 00:07:09.412275] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-21 00:07:09.412288] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-21 00:07:09.412305] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-21 00:07:09.412312] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-21 00:07:09.412318] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-21 00:07:09.412324] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-21 00:07:09.412330] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-21 00:07:09.412338] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration: 0.232 ms, status: 0
[2024-11-21 00:07:09.412418] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration: 0.053 ms, status: 0
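Note: the layout numbers above are internally consistent. 20,971,520 L2P entries at the stated 4-byte address size is exactly 80 MiB, the size the l2p region reports in the dump below; and assuming a 4 KiB FTL logical block (the same granularity as the bs=4K writes earlier, an assumption the log itself does not state), those entries address 80 GiB of user space against the 100 GiB data_btm region below, the difference presumably being spare for garbage collection. Checked in plain shell:

    echo $(( 20971520 * 4 / 1024 / 1024 )) MiB             # L2P table size -> 80 MiB
    echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 )) GiB   # addressable user space -> 80 GiB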
[2024-11-21 00:07:09.412522] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[2024-11-21 00:07:09.412531] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
[2024-11-21 00:07:09.412552] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
[2024-11-21 00:07:09.412571] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
[2024-11-21 00:07:09.412588] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
[2024-11-21 00:07:09.412610] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
[2024-11-21 00:07:09.412626] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
[2024-11-21 00:07:09.412644] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
[2024-11-21 00:07:09.412661] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
[2024-11-21 00:07:09.412680] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
[2024-11-21 00:07:09.412701] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
[2024-11-21 00:07:09.412720] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
[2024-11-21 00:07:09.412739] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
[2024-11-21 00:07:09.412757] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
[2024-11-21 00:07:09.412775] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
[2024-11-21 00:07:09.412793] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
[2024-11-21 00:07:09.412802] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
[2024-11-21 00:07:09.412825] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
[2024-11-21 00:07:09.412844] ftl_layout.c:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
[2024-11-21 00:07:09.412863] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
[2024-11-21 00:07:09.412871] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-21 00:07:09.412880] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
[2024-11-21 00:07:09.412887] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
[2024-11-21 00:07:09.412893] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
[2024-11-21 00:07:09.412899] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
[2024-11-21 00:07:09.412906] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
[2024-11-21 00:07:09.412914] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
[2024-11-21 00:07:09.412920] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
[2024-11-21 00:07:09.412927] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
[2024-11-21 00:07:09.412933] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
[2024-11-21 00:07:09.412944] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
[2024-11-21 00:07:09.412950] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
[2024-11-21 00:07:09.412957] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
[2024-11-21 00:07:09.412964] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
[2024-11-21 00:07:09.412970] upgrade/ftl_sb_v5.c: 416: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-11-21 00:07:09.412977] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
[2024-11-21 00:07:09.412986] upgrade/ftl_sb_v5.c: 430: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-21 00:07:09.412995] upgrade/ftl_sb_v5.c: 430: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
[2024-11-21 00:07:09.413002] upgrade/ftl_sb_v5.c: 430: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
[2024-11-21 00:07:09.413009] upgrade/ftl_sb_v5.c: 430: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
[2024-11-21 00:07:09.413015] upgrade/ftl_sb_v5.c: 430: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-21 00:07:09.413022] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration: 0.553 ms, status: 0
[2024-11-21 00:07:09.432023] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration: 18.943 ms, status: 0
[2024-11-21 00:07:09.432244] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration: 0.052 ms, status: 0
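Note: the superblock metadata entries and the MiB region dump above describe the same layout in different units, so they cross-check. With the 4 KiB block size assumed earlier: type 0x2 (blk_sz 0x5000 = 20480 blocks) is the 80.00 MiB l2p region, the 0x800-block entries (2048 blocks) are the 8.00 MiB p2l checkpoints, and type 0x9 on the base device (blk_sz 0x1900000 = 26,214,400 blocks) is exactly the 102400.00 MiB data_btm region; a one-bit-per-block valid map over those 26,214,400 blocks needs 3.125 MiB, in line with the 3.38 MiB vmap region. For example:

    for sz in 0x5000 0x800 0x1900000; do
        echo "$sz -> $(( sz * 4096 / 1024 / 1024 )) MiB"   # -> 80, 8, 102400
    done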
[2024-11-21 00:07:09.443262] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration: 10.933 ms, status: 0
[2024-11-21 00:07:09.443522] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration: 0.004 ms, status: 0
[2024-11-21 00:07:09.444026] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration: 0.410 ms, status: 0
[2024-11-21 00:07:09.444256] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration: 0.151 ms, status: 0
[2024-11-21 00:07:09.449941] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration: 5.598 ms, status: 0
[2024-11-21 00:07:09.452879] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
[2024-11-21 00:07:09.452908] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-21 00:07:09.452920] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration: 2.864 ms, status: 0
[2024-11-21 00:07:09.464645] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration: 11.676 ms, status: 0
[2024-11-21 00:07:09.466538] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration: 1.804 ms, status: 0
[2024-11-21 00:07:09.468055] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration: 1.455 ms, status: 0
[2024-11-21 00:07:09.468364] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration: 0.222 ms, status: 0
[2024-11-21 00:07:09.485733] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration: 17.331 ms, status: 0
[2024-11-21 00:07:09.491623] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-21 00:07:09.494002] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration: 8.100 ms, status: 0
[2024-11-21 00:07:09.494088] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration: 0.011 ms, status: 0
[2024-11-21 00:07:09.494188] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration: 0.025 ms, status: 0
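Note: a handful of steps dominate this startup: Initialize metadata (18.943 ms), Restore P2L checkpoints (17.331 ms), Restore valid map metadata (11.676 ms), Initialize NV cache (10.933 ms) and Initialize L2P (8.100 ms) together account for roughly 67 ms of the 98.867 ms total reported just below. Totalling every traced step from a saved copy of this log is a one-liner (ftl_startup.log is a hypothetical file name):

    grep -o 'duration: [0-9.]* ms' ftl_startup.log |
        awk '{ sum += $2 } END { printf "total: %.3f ms\n", sum }'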
[2024-11-21 00:07:09.494231] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration: 0.008 ms, status: 0
[2024-11-21 00:07:09.494283] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-21 00:07:09.494322] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration: 0.012 ms, status: 0
[2024-11-21 00:07:09.497558] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration: 3.193 ms, status: 0
[2024-11-21 00:07:09.497659] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration: 0.031 ms, status: 0
[2024-11-21 00:07:09.498549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.867 ms, result 0
[2024-11-21T00:07:11.905Z] Copying: 14/1024 [MB] (14 MBps)
[2024-11-21T00:07:11.905Z - 2024-11-21T00:08:27.792Z] Copying: 14/1024 -> 1020/1024 [MB], per-interval rates 10-26 MBps (one interval reported as 417760/1048576 [kB] at 10224 kBps)
[2024-11-21T00:08:27.792Z] Copying: 1024/1024 [MB] (average 13 MBps)
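Note: the reported average is consistent with the timestamps: the progress run starts just after FTL startup completed at 00:07:09.498 and ends at 00:08:27.792, so 1024 MB moved in roughly 78.3 s, i.e. about 13.1 MBps, matching the 'average 13 MBps' line above:

    awk 'BEGIN { printf "%.1f MBps\n", 1024 / 78.3 }'   # -> 13.1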
[2024-11-21 00:08:27.775034] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration: 0.003 ms, status: 0
[2024-11-21 00:08:27.775147] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-21 00:08:27.775942] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration: 0.778 ms, status: 0
[2024-11-21 00:08:27.779900] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration: 3.859 ms, status: 0
[2024-11-21 00:08:27.803626] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration: 23.475 ms, status: 0
[2024-11-21 00:08:27.810085] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration: 6.330 ms, status: 0
[2024-11-21 00:08:27.814048] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration: 3.619 ms, status: 0
[2024-11-21 00:08:27.820547] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration: 6.129 ms, status: 0
[2024-11-21 00:08:27.821057] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration: 0.105 ms, status: 0
[2024-11-21 00:08:27.824706] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration: 3.578 ms, status: 0
[2024-11-21 00:08:27.827226] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration: 2.411 ms, status: 0
[2024-11-21 00:08:27.829030] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration: 1.677 ms, status: 0
[2024-11-21 00:08:27.831047] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration: 1.876 ms, status: 0
[2024-11-21 00:08:27.831587] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-21 00:08:27.831639 - 00:08:27.832187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-62: all 0 / 261120 wr_cnt: 0 state: free
[2024-11-21 00:08:27.832195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832414] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-11-21 00:08:27.832421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-11-21 00:08:27.832529] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:37.636 [2024-11-21 00:08:27.832544] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5652dacb-6cf0-4515-b30c-e9fc0a790556 00:19:37.636 [2024-11-21 00:08:27.832553] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:37.636 [2024-11-21 00:08:27.832564] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:37.636 [2024-11-21 00:08:27.832573] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:37.636 [2024-11-21 00:08:27.832581] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:37.636 [2024-11-21 00:08:27.832590] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:37.636 [2024-11-21 00:08:27.832599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:37.636 [2024-11-21 00:08:27.832608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:37.636 [2024-11-21 00:08:27.832615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:37.636 [2024-11-21 00:08:27.832622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:37.636 [2024-11-21 00:08:27.832632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-11-21 00:08:27.832641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:37.636 [2024-11-21 00:08:27.832657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:19:37.636 [2024-11-21 00:08:27.832668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.835891] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-11-21 00:08:27.835927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:37.636 [2024-11-21 00:08:27.835938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.179 ms 00:19:37.636 [2024-11-21 00:08:27.835948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.836112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-11-21 00:08:27.836126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:37.636 [2024-11-21 00:08:27.836136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:37.636 [2024-11-21 00:08:27.836144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.845477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.845699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.636 [2024-11-21 00:08:27.845720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.845730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.845803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.845821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.636 [2024-11-21 00:08:27.845835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.845845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.845913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.845930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.636 [2024-11-21 00:08:27.845940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.845950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.845969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.845978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.636 [2024-11-21 00:08:27.845993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.846001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.865653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.865710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.636 [2024-11-21 00:08:27.865723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.865732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.880620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.880889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.636 [2024-11-21 00:08:27.880923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.880933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:37.636 [2024-11-21 00:08:27.880990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.881001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.636 [2024-11-21 00:08:27.881011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.881019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.881057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.881066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.636 [2024-11-21 00:08:27.881075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.881089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.881183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.881195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.636 [2024-11-21 00:08:27.881222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.881231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.881275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.881285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:37.636 [2024-11-21 00:08:27.881318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.881327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.881384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.881400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.636 [2024-11-21 00:08:27.881411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.881420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.881486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.636 [2024-11-21 00:08:27.881500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.636 [2024-11-21 00:08:27.881511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.636 [2024-11-21 00:08:27.881521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-11-21 00:08:27.881694] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 106.605 ms, result 0 00:19:38.210 00:19:38.210 00:19:38.210 00:08:28 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:38.210 [2024-11-21 00:08:28.467909] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
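The ftl_restore step above re-invokes spdk_dd to read 262144 blocks back out of the ftl0 bdev into a testfile. A quick sanity check on that size, under the assumption that ftl0 exposes 4 KiB logical blocks (the log never prints the block size): 262144 blocks at 4096 bytes each is exactly 1 GiB, which lines up with the 1024 MB total that the copy progress further below counts up to. A minimal Python sketch of the arithmetic:

    # Sanity-check the transfer size implied by the spdk_dd flags above.
    # Assumption: ftl0 uses 4 KiB logical blocks; the log does not state this.
    BLOCK_SIZE = 4096            # bytes per block (assumed)
    COUNT = 262144               # from --count=262144

    total_mib = BLOCK_SIZE * COUNT // (1024 * 1024)
    assert total_mib == 1024     # matches "Copying: 1024/1024 [MB]" below
    print(f"{COUNT} blocks x {BLOCK_SIZE} B = {total_mib} MiB")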
00:19:38.210 [2024-11-21 00:08:28.468049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87077 ] 00:19:38.210 [2024-11-21 00:08:28.606509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.471 [2024-11-21 00:08:28.678110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.471 [2024-11-21 00:08:28.827639] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.471 [2024-11-21 00:08:28.827742] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.734 [2024-11-21 00:08:28.990451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.734 [2024-11-21 00:08:28.990513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.734 [2024-11-21 00:08:28.990533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:38.734 [2024-11-21 00:08:28.990542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.734 [2024-11-21 00:08:28.990603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.734 [2024-11-21 00:08:28.990613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.734 [2024-11-21 00:08:28.990623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:38.734 [2024-11-21 00:08:28.990639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.734 [2024-11-21 00:08:28.990662] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.734 [2024-11-21 00:08:28.990948] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.734 [2024-11-21 00:08:28.990966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.734 [2024-11-21 00:08:28.990976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.734 [2024-11-21 00:08:28.990988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:38.734 [2024-11-21 00:08:28.991003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.734 [2024-11-21 00:08:28.993332] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:38.734 [2024-11-21 00:08:28.998071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.734 [2024-11-21 00:08:28.998124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:38.734 [2024-11-21 00:08:28.998137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.741 ms 00:19:38.734 [2024-11-21 00:08:28.998146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.734 [2024-11-21 00:08:28.998230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.734 [2024-11-21 00:08:28.998244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:38.734 [2024-11-21 00:08:28.998257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:38.735 [2024-11-21 00:08:28.998265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.735 [2024-11-21 00:08:29.010093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:38.735 [2024-11-21 00:08:29.010148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.735 [2024-11-21 00:08:29.010161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.747 ms 00:19:38.735 [2024-11-21 00:08:29.010174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.735 [2024-11-21 00:08:29.010284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.735 [2024-11-21 00:08:29.010295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.735 [2024-11-21 00:08:29.010336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:38.735 [2024-11-21 00:08:29.010347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.735 [2024-11-21 00:08:29.010418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.735 [2024-11-21 00:08:29.010431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.735 [2024-11-21 00:08:29.010445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:38.735 [2024-11-21 00:08:29.010454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.735 [2024-11-21 00:08:29.010482] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.735 [2024-11-21 00:08:29.013182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.735 [2024-11-21 00:08:29.013239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.735 [2024-11-21 00:08:29.013259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:19:38.735 [2024-11-21 00:08:29.013269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.735 [2024-11-21 00:08:29.013325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.735 [2024-11-21 00:08:29.013334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.735 [2024-11-21 00:08:29.013347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:38.735 [2024-11-21 00:08:29.013356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.735 [2024-11-21 00:08:29.013382] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:38.735 [2024-11-21 00:08:29.013414] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:38.735 [2024-11-21 00:08:29.013462] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:38.735 [2024-11-21 00:08:29.013480] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:38.735 [2024-11-21 00:08:29.013595] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:38.735 [2024-11-21 00:08:29.013608] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.735 [2024-11-21 00:08:29.013620] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:38.735 [2024-11-21 00:08:29.013633] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.735 [2024-11-21 00:08:29.013646] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.735 [2024-11-21 00:08:29.013655] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:38.735 [2024-11-21 00:08:29.013665] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.735 [2024-11-21 00:08:29.013676] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:38.735 [2024-11-21 00:08:29.013689] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:38.735 [2024-11-21 00:08:29.013697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.735 [2024-11-21 00:08:29.013706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.735 [2024-11-21 00:08:29.013717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:19:38.735 [2024-11-21 00:08:29.013725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.735 [2024-11-21 00:08:29.013813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.735 [2024-11-21 00:08:29.013827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.735 [2024-11-21 00:08:29.013837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:38.735 [2024-11-21 00:08:29.013849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.735 [2024-11-21 00:08:29.013949] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.735 [2024-11-21 00:08:29.013962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.735 [2024-11-21 00:08:29.013971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.735 [2024-11-21 00:08:29.013979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.735 [2024-11-21 00:08:29.013988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:38.735 [2024-11-21 00:08:29.013995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:38.735 [2024-11-21 00:08:29.014011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.735 [2024-11-21 00:08:29.014018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.735 [2024-11-21 00:08:29.014034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.735 [2024-11-21 00:08:29.014041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:38.735 [2024-11-21 00:08:29.014051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.735 [2024-11-21 00:08:29.014058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.735 [2024-11-21 00:08:29.014065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:38.735 [2024-11-21 00:08:29.014074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.735 [2024-11-21 00:08:29.014093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:38.735 [2024-11-21 00:08:29.014101] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.735 [2024-11-21 00:08:29.014116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.735 [2024-11-21 00:08:29.014131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.735 [2024-11-21 00:08:29.014138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.735 [2024-11-21 00:08:29.014151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.735 [2024-11-21 00:08:29.014159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.735 [2024-11-21 00:08:29.014180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.735 [2024-11-21 00:08:29.014189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.735 [2024-11-21 00:08:29.014204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.735 [2024-11-21 00:08:29.014211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.735 [2024-11-21 00:08:29.014225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.735 [2024-11-21 00:08:29.014232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:38.735 [2024-11-21 00:08:29.014239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.735 [2024-11-21 00:08:29.014247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:38.735 [2024-11-21 00:08:29.014254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:38.735 [2024-11-21 00:08:29.014261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:38.735 [2024-11-21 00:08:29.014274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:38.735 [2024-11-21 00:08:29.014283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014291] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.735 [2024-11-21 00:08:29.014695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.735 [2024-11-21 00:08:29.014726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.735 [2024-11-21 00:08:29.014750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.735 [2024-11-21 00:08:29.014773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:38.735 [2024-11-21 00:08:29.014793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.735 [2024-11-21 00:08:29.014817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.735 
[2024-11-21 00:08:29.014836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.735 [2024-11-21 00:08:29.014856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.735 [2024-11-21 00:08:29.014875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.735 [2024-11-21 00:08:29.014897] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.735 [2024-11-21 00:08:29.014995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.735 [2024-11-21 00:08:29.015032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:38.735 [2024-11-21 00:08:29.015062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:38.735 [2024-11-21 00:08:29.015094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:38.735 [2024-11-21 00:08:29.015122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:38.736 [2024-11-21 00:08:29.015528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:38.736 [2024-11-21 00:08:29.015583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:38.736 [2024-11-21 00:08:29.015615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:38.736 [2024-11-21 00:08:29.015645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:38.736 [2024-11-21 00:08:29.015674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:38.736 [2024-11-21 00:08:29.015784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:38.736 [2024-11-21 00:08:29.015814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:38.736 [2024-11-21 00:08:29.015823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:38.736 [2024-11-21 00:08:29.015833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:38.736 [2024-11-21 00:08:29.015842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:38.736 [2024-11-21 00:08:29.015849] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.736 [2024-11-21 00:08:29.015860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.736 [2024-11-21 00:08:29.015868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.736 [2024-11-21 00:08:29.015876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:38.736 [2024-11-21 00:08:29.015885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.736 [2024-11-21 00:08:29.015892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.736 [2024-11-21 00:08:29.015905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.015920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.736 [2024-11-21 00:08:29.015932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:19:38.736 [2024-11-21 00:08:29.015940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.048241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.048622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.736 [2024-11-21 00:08:29.048674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.195 ms 00:19:38.736 [2024-11-21 00:08:29.048695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.048898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.048922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:38.736 [2024-11-21 00:08:29.048941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:19:38.736 [2024-11-21 00:08:29.048958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.065506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.065552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.736 [2024-11-21 00:08:29.065565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.421 ms 00:19:38.736 [2024-11-21 00:08:29.065574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.065620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.065630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.736 [2024-11-21 00:08:29.065643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:38.736 [2024-11-21 00:08:29.065651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.066424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.066464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.736 [2024-11-21 00:08:29.066482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:19:38.736 [2024-11-21 00:08:29.066493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.066660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.066683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.736 [2024-11-21 00:08:29.066692] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:19:38.736 [2024-11-21 00:08:29.066701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.076278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.076485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.736 [2024-11-21 00:08:29.076512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.545 ms 00:19:38.736 [2024-11-21 00:08:29.076522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.081333] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:38.736 [2024-11-21 00:08:29.081387] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:38.736 [2024-11-21 00:08:29.081401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.081411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:38.736 [2024-11-21 00:08:29.081423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.749 ms 00:19:38.736 [2024-11-21 00:08:29.081431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.100594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.100653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:38.736 [2024-11-21 00:08:29.100666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.105 ms 00:19:38.736 [2024-11-21 00:08:29.100675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.103859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.104042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:38.736 [2024-11-21 00:08:29.104060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.130 ms 00:19:38.736 [2024-11-21 00:08:29.104069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.106990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.107039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:38.736 [2024-11-21 00:08:29.107050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:19:38.736 [2024-11-21 00:08:29.107058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.107575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.107631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:38.736 [2024-11-21 00:08:29.107656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:19:38.736 [2024-11-21 00:08:29.107676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.139526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.736 [2024-11-21 00:08:29.139720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:38.736 [2024-11-21 00:08:29.139740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.761 ms 00:19:38.736 [2024-11-21 00:08:29.139756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.736 [2024-11-21 00:08:29.148191] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:38.999 [2024-11-21 00:08:29.151756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.999 [2024-11-21 00:08:29.151801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:38.999 [2024-11-21 00:08:29.151820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.954 ms 00:19:38.999 [2024-11-21 00:08:29.151834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.999 [2024-11-21 00:08:29.151915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.999 [2024-11-21 00:08:29.151928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:38.999 [2024-11-21 00:08:29.151939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:38.999 [2024-11-21 00:08:29.151949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.999 [2024-11-21 00:08:29.152037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.999 [2024-11-21 00:08:29.152049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:38.999 [2024-11-21 00:08:29.152059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:38.999 [2024-11-21 00:08:29.152070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.999 [2024-11-21 00:08:29.152107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.999 [2024-11-21 00:08:29.152120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:38.999 [2024-11-21 00:08:29.152130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:38.999 [2024-11-21 00:08:29.152140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.000 [2024-11-21 00:08:29.152190] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:39.000 [2024-11-21 00:08:29.152202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.000 [2024-11-21 00:08:29.152216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:39.000 [2024-11-21 00:08:29.152226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:39.000 [2024-11-21 00:08:29.152235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.000 [2024-11-21 00:08:29.158660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.000 [2024-11-21 00:08:29.158713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:39.000 [2024-11-21 00:08:29.158727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.401 ms 00:19:39.000 [2024-11-21 00:08:29.158736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.000 [2024-11-21 00:08:29.158839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.000 [2024-11-21 00:08:29.158852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:39.000 [2024-11-21 00:08:29.158863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:39.000 [2024-11-21 00:08:29.158877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.000 
[2024-11-21 00:08:29.160273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 169.276 ms, result 0 00:19:39.946  [2024-11-21T00:08:31.755Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-21T00:08:32.700Z] Copying: 36/1024 [MB] (18 MBps) [2024-11-21T00:08:33.668Z] Copying: 54/1024 [MB] (18 MBps) [2024-11-21T00:08:34.609Z] Copying: 72/1024 [MB] (17 MBps) [2024-11-21T00:08:35.545Z] Copying: 82/1024 [MB] (10 MBps) [2024-11-21T00:08:36.490Z] Copying: 94/1024 [MB] (12 MBps) [2024-11-21T00:08:37.434Z] Copying: 109/1024 [MB] (14 MBps) [2024-11-21T00:08:38.380Z] Copying: 119/1024 [MB] (10 MBps) [2024-11-21T00:08:39.760Z] Copying: 130/1024 [MB] (10 MBps) [2024-11-21T00:08:40.695Z] Copying: 141/1024 [MB] (11 MBps) [2024-11-21T00:08:41.633Z] Copying: 152/1024 [MB] (11 MBps) [2024-11-21T00:08:42.578Z] Copying: 164/1024 [MB] (11 MBps) [2024-11-21T00:08:43.514Z] Copying: 176/1024 [MB] (12 MBps) [2024-11-21T00:08:44.450Z] Copying: 188/1024 [MB] (11 MBps) [2024-11-21T00:08:45.389Z] Copying: 200/1024 [MB] (11 MBps) [2024-11-21T00:08:46.771Z] Copying: 211/1024 [MB] (11 MBps) [2024-11-21T00:08:47.707Z] Copying: 226/1024 [MB] (14 MBps) [2024-11-21T00:08:48.642Z] Copying: 237/1024 [MB] (11 MBps) [2024-11-21T00:08:49.577Z] Copying: 250/1024 [MB] (12 MBps) [2024-11-21T00:08:50.518Z] Copying: 263/1024 [MB] (13 MBps) [2024-11-21T00:08:51.462Z] Copying: 276/1024 [MB] (12 MBps) [2024-11-21T00:08:52.408Z] Copying: 291/1024 [MB] (15 MBps) [2024-11-21T00:08:53.353Z] Copying: 305/1024 [MB] (14 MBps) [2024-11-21T00:08:54.729Z] Copying: 315/1024 [MB] (10 MBps) [2024-11-21T00:08:55.669Z] Copying: 327/1024 [MB] (11 MBps) [2024-11-21T00:08:56.605Z] Copying: 339/1024 [MB] (11 MBps) [2024-11-21T00:08:57.568Z] Copying: 350/1024 [MB] (11 MBps) [2024-11-21T00:08:58.505Z] Copying: 362/1024 [MB] (11 MBps) [2024-11-21T00:08:59.444Z] Copying: 374/1024 [MB] (12 MBps) [2024-11-21T00:09:00.384Z] Copying: 386/1024 [MB] (11 MBps) [2024-11-21T00:09:01.770Z] Copying: 403/1024 [MB] (16 MBps) [2024-11-21T00:09:02.758Z] Copying: 415/1024 [MB] (12 MBps) [2024-11-21T00:09:03.713Z] Copying: 427/1024 [MB] (11 MBps) [2024-11-21T00:09:04.647Z] Copying: 438/1024 [MB] (10 MBps) [2024-11-21T00:09:05.580Z] Copying: 449/1024 [MB] (11 MBps) [2024-11-21T00:09:06.514Z] Copying: 461/1024 [MB] (11 MBps) [2024-11-21T00:09:07.448Z] Copying: 473/1024 [MB] (11 MBps) [2024-11-21T00:09:08.382Z] Copying: 485/1024 [MB] (11 MBps) [2024-11-21T00:09:09.758Z] Copying: 497/1024 [MB] (12 MBps) [2024-11-21T00:09:10.696Z] Copying: 509/1024 [MB] (12 MBps) [2024-11-21T00:09:11.631Z] Copying: 520/1024 [MB] (10 MBps) [2024-11-21T00:09:12.568Z] Copying: 531/1024 [MB] (11 MBps) [2024-11-21T00:09:13.506Z] Copying: 544/1024 [MB] (13 MBps) [2024-11-21T00:09:14.448Z] Copying: 555/1024 [MB] (11 MBps) [2024-11-21T00:09:15.386Z] Copying: 574/1024 [MB] (18 MBps) [2024-11-21T00:09:16.765Z] Copying: 585/1024 [MB] (11 MBps) [2024-11-21T00:09:17.699Z] Copying: 596/1024 [MB] (11 MBps) [2024-11-21T00:09:18.639Z] Copying: 608/1024 [MB] (12 MBps) [2024-11-21T00:09:19.574Z] Copying: 620/1024 [MB] (11 MBps) [2024-11-21T00:09:20.509Z] Copying: 631/1024 [MB] (11 MBps) [2024-11-21T00:09:21.450Z] Copying: 643/1024 [MB] (11 MBps) [2024-11-21T00:09:22.394Z] Copying: 655/1024 [MB] (11 MBps) [2024-11-21T00:09:23.771Z] Copying: 668/1024 [MB] (13 MBps) [2024-11-21T00:09:24.712Z] Copying: 680/1024 [MB] (12 MBps) [2024-11-21T00:09:25.647Z] Copying: 692/1024 [MB] (11 MBps) [2024-11-21T00:09:26.581Z] Copying: 704/1024 [MB] (11 MBps) 
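The run of "Copying:" progress entries continues below; each one carries an ISO-8601 timestamp and a cumulative megabyte count, so the closing figure of "average 12 MBps" can be reproduced by dividing the megabytes moved by the wall time elapsed between the first and last entries (roughly 1006 MB over about 80 s here). A rough sketch, assuming the entries keep exactly the format printed in this log:

    # Recompute the average throughput from the "Copying:" progress entries.
    # Assumption: entries look exactly like
    #   "[2024-11-21T00:08:31.755Z] Copying: 18/1024 [MB] (18 MBps)"
    import re
    from datetime import datetime

    PROGRESS = re.compile(r"\[([0-9T:.Z-]+)\] Copying: (\d+)/\d+ \[MB\]")

    def average_mbps(log_text: str) -> float:
        hits = PROGRESS.findall(log_text)
        (t0, mb0), (t1, mb1) = hits[0], hits[-1]
        ts = lambda s: datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ")
        elapsed = (ts(t1) - ts(t0)).total_seconds()
        # ~12.6 MB/s for this log; the tool rounds and prints "average 12 MBps"
        return (int(mb1) - int(mb0)) / elapsed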
[2024-11-21T00:09:27.522Z] Copying: 716/1024 [MB] (11 MBps) [2024-11-21T00:09:28.460Z] Copying: 728/1024 [MB] (12 MBps) [2024-11-21T00:09:29.398Z] Copying: 739/1024 [MB] (10 MBps) [2024-11-21T00:09:30.769Z] Copying: 750/1024 [MB] (11 MBps) [2024-11-21T00:09:31.735Z] Copying: 761/1024 [MB] (11 MBps) [2024-11-21T00:09:32.671Z] Copying: 773/1024 [MB] (11 MBps) [2024-11-21T00:09:33.605Z] Copying: 784/1024 [MB] (11 MBps) [2024-11-21T00:09:34.540Z] Copying: 796/1024 [MB] (11 MBps) [2024-11-21T00:09:35.475Z] Copying: 808/1024 [MB] (12 MBps) [2024-11-21T00:09:36.411Z] Copying: 820/1024 [MB] (12 MBps) [2024-11-21T00:09:37.351Z] Copying: 832/1024 [MB] (12 MBps) [2024-11-21T00:09:38.725Z] Copying: 844/1024 [MB] (11 MBps) [2024-11-21T00:09:39.667Z] Copying: 856/1024 [MB] (12 MBps) [2024-11-21T00:09:40.604Z] Copying: 868/1024 [MB] (11 MBps) [2024-11-21T00:09:41.543Z] Copying: 879/1024 [MB] (11 MBps) [2024-11-21T00:09:42.482Z] Copying: 892/1024 [MB] (12 MBps) [2024-11-21T00:09:43.417Z] Copying: 907/1024 [MB] (14 MBps) [2024-11-21T00:09:44.359Z] Copying: 919/1024 [MB] (12 MBps) [2024-11-21T00:09:45.743Z] Copying: 930/1024 [MB] (11 MBps) [2024-11-21T00:09:46.687Z] Copying: 950/1024 [MB] (20 MBps) [2024-11-21T00:09:47.631Z] Copying: 963/1024 [MB] (12 MBps) [2024-11-21T00:09:48.578Z] Copying: 979/1024 [MB] (16 MBps) [2024-11-21T00:09:49.523Z] Copying: 990/1024 [MB] (11 MBps) [2024-11-21T00:09:50.468Z] Copying: 1005/1024 [MB] (14 MBps) [2024-11-21T00:09:51.414Z] Copying: 1015/1024 [MB] (10 MBps) [2024-11-21T00:09:51.415Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-21 00:09:51.250600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.250694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:00.994 [2024-11-21 00:09:51.250718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:00.994 [2024-11-21 00:09:51.250737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.250775] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:00.994 [2024-11-21 00:09:51.251852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.251890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:00.994 [2024-11-21 00:09:51.251908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:21:00.994 [2024-11-21 00:09:51.251921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.253497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.253538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:00.994 [2024-11-21 00:09:51.253553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.544 ms 00:21:00.994 [2024-11-21 00:09:51.253565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.259121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.259181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:00.994 [2024-11-21 00:09:51.259196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.534 ms 00:21:00.994 [2024-11-21 00:09:51.259207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.266118] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.266164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:00.994 [2024-11-21 00:09:51.266176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.880 ms 00:21:00.994 [2024-11-21 00:09:51.266185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.269429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.269487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:00.994 [2024-11-21 00:09:51.269497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.171 ms 00:21:00.994 [2024-11-21 00:09:51.269506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.275899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.275955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:00.994 [2024-11-21 00:09:51.275967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.343 ms 00:21:00.994 [2024-11-21 00:09:51.275977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.276124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.276138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:00.994 [2024-11-21 00:09:51.276150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:00.994 [2024-11-21 00:09:51.276163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.279726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.279777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:00.994 [2024-11-21 00:09:51.279788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.545 ms 00:21:00.994 [2024-11-21 00:09:51.279797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.282806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.282855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:00.994 [2024-11-21 00:09:51.282865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:21:00.994 [2024-11-21 00:09:51.282873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.285362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.285408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:00.994 [2024-11-21 00:09:51.285422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.444 ms 00:21:00.994 [2024-11-21 00:09:51.285431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.287904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.994 [2024-11-21 00:09:51.287952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:00.994 [2024-11-21 00:09:51.287963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:21:00.994 [2024-11-21 00:09:51.287971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:00.994 [2024-11-21 00:09:51.288014] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:00.994 [2024-11-21 00:09:51.288039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:00.994 [2024-11-21 00:09:51.288208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:21:00.995 [2024-11-21 00:09:51.288231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:00.995 [2024-11-21 00:09:51.288855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:00.996 [2024-11-21 00:09:51.288863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:00.996 [2024-11-21 00:09:51.288872] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:00.996 [2024-11-21 00:09:51.288879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:00.996 [2024-11-21 00:09:51.288895] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:00.996 [2024-11-21 00:09:51.288905] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5652dacb-6cf0-4515-b30c-e9fc0a790556 00:21:00.996 [2024-11-21 00:09:51.288914] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:00.996 [2024-11-21 00:09:51.288922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:00.996 [2024-11-21 00:09:51.288930] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:00.996 [2024-11-21 00:09:51.288939] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:00.996 [2024-11-21 00:09:51.288947] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:00.996 [2024-11-21 00:09:51.288956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:00.996 [2024-11-21 00:09:51.288965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:00.996 [2024-11-21 00:09:51.288971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:00.996 [2024-11-21 00:09:51.288978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:00.996 [2024-11-21 00:09:51.288986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.996 [2024-11-21 00:09:51.288995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:00.996 [2024-11-21 00:09:51.289016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:21:00.996 [2024-11-21 00:09:51.289031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.292182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.996 [2024-11-21 00:09:51.292429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:00.996 [2024-11-21 00:09:51.292452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:21:00.996 [2024-11-21 00:09:51.292461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.292624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.996 [2024-11-21 00:09:51.292640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:00.996 [2024-11-21 00:09:51.292651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:21:00.996 [2024-11-21 00:09:51.292659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.302080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.302135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.996 [2024-11-21 00:09:51.302147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.302156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.302221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.302237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.996 
[2024-11-21 00:09:51.302246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.302254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.302363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.302377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.996 [2024-11-21 00:09:51.302386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.302394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.302412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.302426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.996 [2024-11-21 00:09:51.302442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.302452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.322016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.322076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.996 [2024-11-21 00:09:51.322098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.322107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.337200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.337258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.996 [2024-11-21 00:09:51.337278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.337287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.337378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.337389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.996 [2024-11-21 00:09:51.337398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.337407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.337454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.337470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.996 [2024-11-21 00:09:51.337482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.337495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.337579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.337590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.996 [2024-11-21 00:09:51.337604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.337612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.337651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.337664] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:00.996 [2024-11-21 00:09:51.337673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.337682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.337736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.337748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.996 [2024-11-21 00:09:51.337758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.337766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.337821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.996 [2024-11-21 00:09:51.337833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.996 [2024-11-21 00:09:51.337845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.996 [2024-11-21 00:09:51.337860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.996 [2024-11-21 00:09:51.338018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.384 ms, result 0 00:21:01.258 00:21:01.258 00:21:01.258 00:09:51 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:03.807 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:03.807 00:09:53 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:03.807 [2024-11-21 00:09:53.949037] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:03.807 [2024-11-21 00:09:53.949198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87959 ] 00:21:03.807 [2024-11-21 00:09:54.086780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.807 [2024-11-21 00:09:54.159449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.068 [2024-11-21 00:09:54.308231] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:04.068 [2024-11-21 00:09:54.308350] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:04.068 [2024-11-21 00:09:54.471365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.068 [2024-11-21 00:09:54.471426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:04.068 [2024-11-21 00:09:54.471448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:04.068 [2024-11-21 00:09:54.471458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.068 [2024-11-21 00:09:54.471526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.068 [2024-11-21 00:09:54.471538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.068 [2024-11-21 00:09:54.471554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:04.068 [2024-11-21 00:09:54.471572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.068 [2024-11-21 00:09:54.471595] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:04.068 [2024-11-21 00:09:54.471873] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:04.068 [2024-11-21 00:09:54.471895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.068 [2024-11-21 00:09:54.471908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.068 [2024-11-21 00:09:54.471921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:21:04.068 [2024-11-21 00:09:54.471930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.068 [2024-11-21 00:09:54.474207] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:04.068 [2024-11-21 00:09:54.478967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.068 [2024-11-21 00:09:54.479021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:04.068 [2024-11-21 00:09:54.479035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.762 ms 00:21:04.068 [2024-11-21 00:09:54.479044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.068 [2024-11-21 00:09:54.479126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.068 [2024-11-21 00:09:54.479140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:04.068 [2024-11-21 00:09:54.479153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:04.068 [2024-11-21 00:09:54.479161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.332 [2024-11-21 00:09:54.490758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:04.332 [2024-11-21 00:09:54.490803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.332 [2024-11-21 00:09:54.490817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.551 ms 00:21:04.332 [2024-11-21 00:09:54.490825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.332 [2024-11-21 00:09:54.490950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.332 [2024-11-21 00:09:54.490961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.332 [2024-11-21 00:09:54.490974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:04.332 [2024-11-21 00:09:54.490989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.332 [2024-11-21 00:09:54.491060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.332 [2024-11-21 00:09:54.491072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:04.332 [2024-11-21 00:09:54.491081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:04.332 [2024-11-21 00:09:54.491090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.332 [2024-11-21 00:09:54.491119] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:04.332 [2024-11-21 00:09:54.493823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.332 [2024-11-21 00:09:54.494068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.332 [2024-11-21 00:09:54.494089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.711 ms 00:21:04.332 [2024-11-21 00:09:54.494099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.332 [2024-11-21 00:09:54.494144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.332 [2024-11-21 00:09:54.494163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:04.332 [2024-11-21 00:09:54.494174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:04.332 [2024-11-21 00:09:54.494184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.332 [2024-11-21 00:09:54.494211] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:04.332 [2024-11-21 00:09:54.494244] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:04.332 [2024-11-21 00:09:54.494294] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:04.332 [2024-11-21 00:09:54.494340] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:04.332 [2024-11-21 00:09:54.494452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:04.332 [2024-11-21 00:09:54.494468] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:04.332 [2024-11-21 00:09:54.494481] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:04.332 [2024-11-21 00:09:54.494492] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:04.332 [2024-11-21 00:09:54.494506] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:04.332 [2024-11-21 00:09:54.494516] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:04.332 [2024-11-21 00:09:54.494524] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:04.332 [2024-11-21 00:09:54.494534] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:04.332 [2024-11-21 00:09:54.494544] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:04.332 [2024-11-21 00:09:54.494554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.332 [2024-11-21 00:09:54.494563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:04.332 [2024-11-21 00:09:54.494571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:21:04.332 [2024-11-21 00:09:54.494581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.332 [2024-11-21 00:09:54.494669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.332 [2024-11-21 00:09:54.494680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:04.332 [2024-11-21 00:09:54.494692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:04.332 [2024-11-21 00:09:54.494703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.332 [2024-11-21 00:09:54.494808] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:04.332 [2024-11-21 00:09:54.494824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:04.332 [2024-11-21 00:09:54.494834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.332 [2024-11-21 00:09:54.494844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.332 [2024-11-21 00:09:54.494853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:04.332 [2024-11-21 00:09:54.494860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:04.332 [2024-11-21 00:09:54.494867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:04.332 [2024-11-21 00:09:54.494874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:04.332 [2024-11-21 00:09:54.494884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:04.332 [2024-11-21 00:09:54.494892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.332 [2024-11-21 00:09:54.494901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:04.332 [2024-11-21 00:09:54.494908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:04.332 [2024-11-21 00:09:54.494919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.332 [2024-11-21 00:09:54.494927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:04.332 [2024-11-21 00:09:54.494936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:04.332 [2024-11-21 00:09:54.494945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.332 [2024-11-21 00:09:54.494957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:04.332 [2024-11-21 00:09:54.494965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:04.332 [2024-11-21 00:09:54.494974] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.332 [2024-11-21 00:09:54.494981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:04.332 [2024-11-21 00:09:54.494989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:04.332 [2024-11-21 00:09:54.494996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.332 [2024-11-21 00:09:54.495003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:04.332 [2024-11-21 00:09:54.495010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:04.332 [2024-11-21 00:09:54.495017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.332 [2024-11-21 00:09:54.495024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:04.332 [2024-11-21 00:09:54.495032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:04.332 [2024-11-21 00:09:54.495039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.332 [2024-11-21 00:09:54.495053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:04.332 [2024-11-21 00:09:54.495061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:04.332 [2024-11-21 00:09:54.495067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.332 [2024-11-21 00:09:54.495074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:04.332 [2024-11-21 00:09:54.495083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:04.332 [2024-11-21 00:09:54.495091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.332 [2024-11-21 00:09:54.495098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:04.332 [2024-11-21 00:09:54.495105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:04.332 [2024-11-21 00:09:54.495112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.332 [2024-11-21 00:09:54.495119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:04.332 [2024-11-21 00:09:54.495126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:04.332 [2024-11-21 00:09:54.495135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.332 [2024-11-21 00:09:54.495142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:04.332 [2024-11-21 00:09:54.495148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:04.332 [2024-11-21 00:09:54.495155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.332 [2024-11-21 00:09:54.495162] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:04.332 [2024-11-21 00:09:54.495173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:04.332 [2024-11-21 00:09:54.495182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.332 [2024-11-21 00:09:54.495192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.332 [2024-11-21 00:09:54.495200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:04.332 [2024-11-21 00:09:54.495209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:04.332 [2024-11-21 00:09:54.495216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:04.332 
[2024-11-21 00:09:54.495225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:04.332 [2024-11-21 00:09:54.495232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:04.332 [2024-11-21 00:09:54.495240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:04.333 [2024-11-21 00:09:54.495249] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:04.333 [2024-11-21 00:09:54.495258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.333 [2024-11-21 00:09:54.495268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:04.333 [2024-11-21 00:09:54.495277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:04.333 [2024-11-21 00:09:54.495284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:04.333 [2024-11-21 00:09:54.495291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:04.333 [2024-11-21 00:09:54.495315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:04.333 [2024-11-21 00:09:54.495326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:04.333 [2024-11-21 00:09:54.495334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:04.333 [2024-11-21 00:09:54.495343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:04.333 [2024-11-21 00:09:54.495351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:04.333 [2024-11-21 00:09:54.495364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:04.333 [2024-11-21 00:09:54.495372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:04.333 [2024-11-21 00:09:54.495379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:04.333 [2024-11-21 00:09:54.495387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:04.333 [2024-11-21 00:09:54.495396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:04.333 [2024-11-21 00:09:54.495405] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:04.333 [2024-11-21 00:09:54.495415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.333 [2024-11-21 00:09:54.495424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:04.333 [2024-11-21 00:09:54.495435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:04.333 [2024-11-21 00:09:54.495444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:04.333 [2024-11-21 00:09:54.495462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:04.333 [2024-11-21 00:09:54.495470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.495481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:04.333 [2024-11-21 00:09:54.495491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:21:04.333 [2024-11-21 00:09:54.495499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.523263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.523357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.333 [2024-11-21 00:09:54.523382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.695 ms 00:21:04.333 [2024-11-21 00:09:54.523394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.523516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.523530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:04.333 [2024-11-21 00:09:54.523548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:04.333 [2024-11-21 00:09:54.523559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.539673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.539720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.333 [2024-11-21 00:09:54.539732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.026 ms 00:21:04.333 [2024-11-21 00:09:54.539742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.539783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.539793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.333 [2024-11-21 00:09:54.539802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:04.333 [2024-11-21 00:09:54.539810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.540561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.540600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.333 [2024-11-21 00:09:54.540613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:21:04.333 [2024-11-21 00:09:54.540623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.540792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.540813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.333 [2024-11-21 00:09:54.540822] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:21:04.333 [2024-11-21 00:09:54.540831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.550415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.550655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.333 [2024-11-21 00:09:54.550682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.557 ms 00:21:04.333 [2024-11-21 00:09:54.550693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.555390] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:04.333 [2024-11-21 00:09:54.555443] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:04.333 [2024-11-21 00:09:54.555458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.555467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:04.333 [2024-11-21 00:09:54.555477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.639 ms 00:21:04.333 [2024-11-21 00:09:54.555486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.571860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.571914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:04.333 [2024-11-21 00:09:54.571930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.318 ms 00:21:04.333 [2024-11-21 00:09:54.571940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.574916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.574960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:04.333 [2024-11-21 00:09:54.574971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.920 ms 00:21:04.333 [2024-11-21 00:09:54.574980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.577648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.577855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:04.333 [2024-11-21 00:09:54.577877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.619 ms 00:21:04.333 [2024-11-21 00:09:54.577887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.578271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.578289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:04.333 [2024-11-21 00:09:54.578326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:21:04.333 [2024-11-21 00:09:54.578335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.609178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.609415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:04.333 [2024-11-21 00:09:54.609438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
30.819 ms 00:21:04.333 [2024-11-21 00:09:54.609449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.618358] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:04.333 [2024-11-21 00:09:54.622019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.622063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:04.333 [2024-11-21 00:09:54.622082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.515 ms 00:21:04.333 [2024-11-21 00:09:54.622097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.622181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.622193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:04.333 [2024-11-21 00:09:54.622204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:04.333 [2024-11-21 00:09:54.622214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.622323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.622337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:04.333 [2024-11-21 00:09:54.622354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:04.333 [2024-11-21 00:09:54.622366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.622405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.622414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:04.333 [2024-11-21 00:09:54.622424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:04.333 [2024-11-21 00:09:54.622434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.333 [2024-11-21 00:09:54.622484] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:04.333 [2024-11-21 00:09:54.622500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.333 [2024-11-21 00:09:54.622509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:04.333 [2024-11-21 00:09:54.622519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:04.333 [2024-11-21 00:09:54.622527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.334 [2024-11-21 00:09:54.629054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.334 [2024-11-21 00:09:54.629267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:04.334 [2024-11-21 00:09:54.629289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.502 ms 00:21:04.334 [2024-11-21 00:09:54.629321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.334 [2024-11-21 00:09:54.629408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.334 [2024-11-21 00:09:54.629420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:04.334 [2024-11-21 00:09:54.629430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:04.334 [2024-11-21 00:09:54.629442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.334 
[2024-11-21 00:09:54.631648] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 159.523 ms, result 0 00:21:05.278  [2024-11-21T00:09:57.079Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-21T00:09:57.649Z] Copying: 22/1024 [MB] (11 MBps) [2024-11-21T00:09:59.024Z] Copying: 34/1024 [MB] (12 MBps) [2024-11-21T00:09:59.957Z] Copying: 45/1024 [MB] (10 MBps) [2024-11-21T00:10:00.945Z] Copying: 56/1024 [MB] (11 MBps) [2024-11-21T00:10:01.897Z] Copying: 71/1024 [MB] (14 MBps) [2024-11-21T00:10:02.841Z] Copying: 83/1024 [MB] (12 MBps) [2024-11-21T00:10:03.781Z] Copying: 96/1024 [MB] (12 MBps) [2024-11-21T00:10:04.716Z] Copying: 107/1024 [MB] (11 MBps) [2024-11-21T00:10:05.653Z] Copying: 127/1024 [MB] (20 MBps) [2024-11-21T00:10:07.027Z] Copying: 138/1024 [MB] (11 MBps) [2024-11-21T00:10:07.964Z] Copying: 150/1024 [MB] (11 MBps) [2024-11-21T00:10:08.899Z] Copying: 162/1024 [MB] (11 MBps) [2024-11-21T00:10:09.832Z] Copying: 175/1024 [MB] (13 MBps) [2024-11-21T00:10:10.766Z] Copying: 186/1024 [MB] (11 MBps) [2024-11-21T00:10:11.703Z] Copying: 198/1024 [MB] (11 MBps) [2024-11-21T00:10:13.078Z] Copying: 210/1024 [MB] (11 MBps) [2024-11-21T00:10:13.644Z] Copying: 221/1024 [MB] (11 MBps) [2024-11-21T00:10:15.019Z] Copying: 234/1024 [MB] (12 MBps) [2024-11-21T00:10:15.956Z] Copying: 245/1024 [MB] (11 MBps) [2024-11-21T00:10:16.894Z] Copying: 257/1024 [MB] (11 MBps) [2024-11-21T00:10:17.827Z] Copying: 269/1024 [MB] (12 MBps) [2024-11-21T00:10:18.760Z] Copying: 281/1024 [MB] (11 MBps) [2024-11-21T00:10:19.692Z] Copying: 292/1024 [MB] (11 MBps) [2024-11-21T00:10:21.068Z] Copying: 304/1024 [MB] (11 MBps) [2024-11-21T00:10:22.002Z] Copying: 316/1024 [MB] (11 MBps) [2024-11-21T00:10:22.939Z] Copying: 328/1024 [MB] (12 MBps) [2024-11-21T00:10:23.873Z] Copying: 340/1024 [MB] (11 MBps) [2024-11-21T00:10:24.809Z] Copying: 351/1024 [MB] (11 MBps) [2024-11-21T00:10:25.745Z] Copying: 363/1024 [MB] (11 MBps) [2024-11-21T00:10:26.681Z] Copying: 374/1024 [MB] (11 MBps) [2024-11-21T00:10:28.058Z] Copying: 386/1024 [MB] (11 MBps) [2024-11-21T00:10:28.993Z] Copying: 401/1024 [MB] (15 MBps) [2024-11-21T00:10:29.984Z] Copying: 415/1024 [MB] (14 MBps) [2024-11-21T00:10:30.920Z] Copying: 427/1024 [MB] (11 MBps) [2024-11-21T00:10:31.856Z] Copying: 443/1024 [MB] (15 MBps) [2024-11-21T00:10:32.791Z] Copying: 457/1024 [MB] (14 MBps) [2024-11-21T00:10:33.725Z] Copying: 471/1024 [MB] (13 MBps) [2024-11-21T00:10:34.661Z] Copying: 483/1024 [MB] (11 MBps) [2024-11-21T00:10:36.036Z] Copying: 500/1024 [MB] (17 MBps) [2024-11-21T00:10:36.977Z] Copying: 512/1024 [MB] (11 MBps) [2024-11-21T00:10:37.917Z] Copying: 527/1024 [MB] (15 MBps) [2024-11-21T00:10:38.862Z] Copying: 538/1024 [MB] (10 MBps) [2024-11-21T00:10:39.803Z] Copying: 548/1024 [MB] (10 MBps) [2024-11-21T00:10:40.737Z] Copying: 571576/1048576 [kB] (9992 kBps) [2024-11-21T00:10:41.673Z] Copying: 568/1024 [MB] (10 MBps) [2024-11-21T00:10:43.051Z] Copying: 583/1024 [MB] (14 MBps) [2024-11-21T00:10:44.001Z] Copying: 598/1024 [MB] (15 MBps) [2024-11-21T00:10:44.944Z] Copying: 623312/1048576 [kB] (10200 kBps) [2024-11-21T00:10:45.881Z] Copying: 618/1024 [MB] (10 MBps) [2024-11-21T00:10:46.816Z] Copying: 629/1024 [MB] (11 MBps) [2024-11-21T00:10:47.752Z] Copying: 648/1024 [MB] (18 MBps) [2024-11-21T00:10:48.687Z] Copying: 659/1024 [MB] (11 MBps) [2024-11-21T00:10:50.062Z] Copying: 671/1024 [MB] (11 MBps) [2024-11-21T00:10:50.997Z] Copying: 685/1024 [MB] (13 MBps) [2024-11-21T00:10:51.931Z] Copying: 700/1024 [MB] (15 MBps) 
[2024-11-21T00:10:52.866Z] Copying: 722/1024 [MB] (21 MBps) [2024-11-21T00:10:53.801Z] Copying: 733/1024 [MB] (11 MBps) [2024-11-21T00:10:54.736Z] Copying: 745/1024 [MB] (11 MBps) [2024-11-21T00:10:55.671Z] Copying: 757/1024 [MB] (11 MBps) [2024-11-21T00:10:57.047Z] Copying: 768/1024 [MB] (11 MBps) [2024-11-21T00:10:57.983Z] Copying: 780/1024 [MB] (11 MBps) [2024-11-21T00:10:58.957Z] Copying: 792/1024 [MB] (11 MBps) [2024-11-21T00:10:59.890Z] Copying: 804/1024 [MB] (12 MBps) [2024-11-21T00:11:00.829Z] Copying: 823/1024 [MB] (18 MBps) [2024-11-21T00:11:01.763Z] Copying: 834/1024 [MB] (11 MBps) [2024-11-21T00:11:02.698Z] Copying: 847/1024 [MB] (13 MBps) [2024-11-21T00:11:04.070Z] Copying: 861/1024 [MB] (13 MBps) [2024-11-21T00:11:05.004Z] Copying: 872/1024 [MB] (11 MBps) [2024-11-21T00:11:05.938Z] Copying: 884/1024 [MB] (11 MBps) [2024-11-21T00:11:06.873Z] Copying: 895/1024 [MB] (11 MBps) [2024-11-21T00:11:07.808Z] Copying: 907/1024 [MB] (12 MBps) [2024-11-21T00:11:08.742Z] Copying: 919/1024 [MB] (11 MBps) [2024-11-21T00:11:09.689Z] Copying: 931/1024 [MB] (11 MBps) [2024-11-21T00:11:11.062Z] Copying: 947/1024 [MB] (16 MBps) [2024-11-21T00:11:11.997Z] Copying: 959/1024 [MB] (11 MBps) [2024-11-21T00:11:12.932Z] Copying: 971/1024 [MB] (11 MBps) [2024-11-21T00:11:13.866Z] Copying: 982/1024 [MB] (11 MBps) [2024-11-21T00:11:14.798Z] Copying: 994/1024 [MB] (11 MBps) [2024-11-21T00:11:15.732Z] Copying: 1006/1024 [MB] (11 MBps) [2024-11-21T00:11:16.668Z] Copying: 1019/1024 [MB] (12 MBps) [2024-11-21T00:11:16.669Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-21 00:11:16.561164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.248 [2024-11-21 00:11:16.561220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:26.248 [2024-11-21 00:11:16.561235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:26.248 [2024-11-21 00:11:16.561244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.248 [2024-11-21 00:11:16.562347] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:26.248 [2024-11-21 00:11:16.563422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.248 [2024-11-21 00:11:16.563455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:26.248 [2024-11-21 00:11:16.563472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:22:26.248 [2024-11-21 00:11:16.563486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.248 [2024-11-21 00:11:16.577790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.248 [2024-11-21 00:11:16.577961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:26.248 [2024-11-21 00:11:16.577980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.277 ms 00:22:26.248 [2024-11-21 00:11:16.577990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.248 [2024-11-21 00:11:16.600683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.248 [2024-11-21 00:11:16.600813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:26.248 [2024-11-21 00:11:16.600932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.674 ms 00:22:26.248 [2024-11-21 00:11:16.600963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.248 [2024-11-21 00:11:16.607111] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.248 [2024-11-21 00:11:16.607218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:26.248 [2024-11-21 00:11:16.607269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.102 ms 00:22:26.248 [2024-11-21 00:11:16.607307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.248 [2024-11-21 00:11:16.609662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.248 [2024-11-21 00:11:16.609766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:26.248 [2024-11-21 00:11:16.609815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.294 ms 00:22:26.248 [2024-11-21 00:11:16.609836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.248 [2024-11-21 00:11:16.614005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.248 [2024-11-21 00:11:16.614109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:26.248 [2024-11-21 00:11:16.614158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.129 ms 00:22:26.248 [2024-11-21 00:11:16.614180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.511 [2024-11-21 00:11:16.843581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.511 [2024-11-21 00:11:16.843788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:26.511 [2024-11-21 00:11:16.843970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 229.359 ms 00:22:26.511 [2024-11-21 00:11:16.844015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.511 [2024-11-21 00:11:16.847629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.511 [2024-11-21 00:11:16.847788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:26.511 [2024-11-21 00:11:16.847853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.572 ms 00:22:26.511 [2024-11-21 00:11:16.847877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.511 [2024-11-21 00:11:16.850312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.511 [2024-11-21 00:11:16.850461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:26.511 [2024-11-21 00:11:16.850527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.387 ms 00:22:26.511 [2024-11-21 00:11:16.850548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.511 [2024-11-21 00:11:16.852736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.511 [2024-11-21 00:11:16.852901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:26.511 [2024-11-21 00:11:16.852959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:22:26.511 [2024-11-21 00:11:16.852980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.511 [2024-11-21 00:11:16.855210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.511 [2024-11-21 00:11:16.855393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:26.511 [2024-11-21 00:11:16.855564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:22:26.511 [2024-11-21 00:11:16.855949] mngt/ftl_mngt.c: 431:trace_step: 
00:22:26.511 [2024-11-21 00:11:16.856098] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:26.511 [2024-11-21 00:11:16.856472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 80640 / 261120 wr_cnt: 1 state: open
00:22:26.511 [2024-11-21 00:11:16.856503 .. 00:11:16.857783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 100: 0 / 261120 wr_cnt: 0 state: free [99 identical entries condensed]
00:22:26.512 [2024-11-21 00:11:16.857824] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:26.512 [2024-11-21 00:11:16.857845] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5652dacb-6cf0-4515-b30c-e9fc0a790556
00:22:26.512 [2024-11-21 00:11:16.857876] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 80640
00:22:26.512 [2024-11-21 00:11:16.857896] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 81600
00:22:26.512 [2024-11-21 00:11:16.857914] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 80640
00:22:26.512 [2024-11-21 00:11:16.857947] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0119
00:22:26.512 [2024-11-21 00:11:16.857965] ftl_debug.c: 218-220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0 | high: 0 | low: 0 | start: 0
00:22:26.512 [2024-11-21 00:11:16.858063] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics | duration: 1.966 ms | status: 0
00:22:26.512 [2024-11-21 00:11:16.861292] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P | duration: 3.052 ms | status: 0
00:22:26.512 [2024-11-21 00:11:16.861643] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing | duration: 0.137 ms | status: 0
00:22:26.512 [2024-11-21 00:11:16.870738] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc | duration: 0.000 ms | status: 0
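The statistics dump above makes the write-amplification figure easy to verify: WAF is simply total media writes over user writes, and the 960-block difference is presumably the metadata persisted alongside the user data.

\[
\mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{81600}{80640} \approx 1.0119,
\qquad 81600 - 80640 = 960 \text{ non-user blocks}
\]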
00:22:26.513 [2024-11-21 00:11:16.870873] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata | duration: 0.000 ms | status: 0
00:22:26.513 [2024-11-21 00:11:16.870971] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map | duration: 0.000 ms | status: 0
00:22:26.513 [2024-11-21 00:11:16.871022] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map | duration: 0.000 ms | status: 0
00:22:26.513 [2024-11-21 00:11:16.890262] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache | duration: 0.000 ms | status: 0
00:22:26.513 [2024-11-21 00:11:16.905770] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata | duration: 0.000 ms | status: 0
00:22:26.513 [2024-11-21 00:11:16.905972] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel | duration: 0.000 ms | status: 0
00:22:26.513 [2024-11-21 00:11:16.906052] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands | duration: 0.000 ms | status: 0
00:22:26.513 [2024-11-21 00:11:16.906166] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools | duration: 0.000 ms | status: 0
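The Rollback entries mirror the startup Actions in reverse order: reloc was initialized last and is rolled back first, while the base bdev was opened first and is released last. A minimal sketch of that cleanup-stack pattern, assuming each successful step registers an undo handler; illustrative only, not SPDK's actual step table:

#include <stdio.h>

typedef void (*rollback_fn)(void);

/* Stand-ins for the real undo routines named in the log. */
static void rb_reloc(void)     { printf("Rollback: Initialize reloc\n"); }
static void rb_bands_md(void)  { printf("Rollback: Initialize bands metadata\n"); }
static void rb_open_base(void) { printf("Rollback: Open base bdev\n"); }

#define MAX_STEPS 16
static rollback_fn undo_stack[MAX_STEPS];
static int undo_depth;

/* After a startup step succeeds, remember how to undo it. */
static void register_rollback(rollback_fn fn)
{
    if (undo_depth < MAX_STEPS)
        undo_stack[undo_depth++] = fn;
}

/* On shutdown (or a failed startup), unwind in reverse order. */
static void unwind(void)
{
    while (undo_depth > 0)
        undo_stack[--undo_depth]();
}

int main(void)
{
    register_rollback(rb_open_base); /* "Open base bdev" ran first */
    register_rollback(rb_bands_md);
    register_rollback(rb_reloc);     /* "Initialize reloc" ran last */
    unwind(); /* prints reloc, bands metadata, base bdev - the log's order */
    return 0;
}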
00:22:26.513 [2024-11-21 00:11:16.906237] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock | duration: 0.000 ms | status: 0
00:22:26.513 [2024-11-21 00:11:16.906379] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev | duration: 0.000 ms | status: 0
00:22:26.513 [2024-11-21 00:11:16.906480] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev | duration: 0.000 ms | status: 0
00:22:26.513 [2024-11-21 00:11:16.906685] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.840 ms, result 0
00:22:27.457 00:11:17 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:22:27.457 [2024-11-21 00:11:17.747825] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
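For the spdk_dd invocation above, assuming --count and --skip are in units of the 4096-byte FTL block implied by the 1024 MB progress total that follows, the transfer and offset work out as:

\[
262144 \times 4096\,\mathrm{B} = 2^{30}\,\mathrm{B} = 1\,\mathrm{GiB}\ (\text{the 1024 MB progress total}),
\qquad
131072 \times 4096\,\mathrm{B} = 2^{29}\,\mathrm{B} = 512\,\mathrm{MiB\ offset}
\]

That is, this pass reads 1 GiB back from ftl0 starting 512 MiB into the device.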
00:22:27.457 [2024-11-21 00:11:17.747993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88824 ]
00:22:27.717 [2024-11-21 00:11:17.883822] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:27.717 [2024-11-21 00:11:17.955860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:22:27.717 [2024-11-21 00:11:18.103913] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:27.717 [2024-11-21 00:11:18.104014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:27.979 [2024-11-21 00:11:18.267468] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration | duration: 0.006 ms | status: 0
00:22:27.979 [2024-11-21 00:11:18.267619] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev | duration: 0.039 ms | status: 0
00:22:27.979 [2024-11-21 00:11:18.267681] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:27.979 [2024-11-21 00:11:18.267962] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:22:27.979 [2024-11-21 00:11:18.267981] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev | duration: 0.309 ms | status: 0
00:22:27.979 [2024-11-21 00:11:18.270270] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:22:27.979 [2024-11-21 00:11:18.274992] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block | duration: 4.724 ms | status: 0
00:22:27.979 [2024-11-21 00:11:18.275151] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block | duration: 0.029 ms | status: 0
00:22:27.979 [2024-11-21 00:11:18.286645] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools | duration: 11.414 ms | status: 0
00:22:27.979 [2024-11-21 00:11:18.286840] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands | duration: 0.080 ms | status: 0
00:22:27.979 [2024-11-21 00:11:18.286930] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device | duration: 0.010 ms | status: 0
00:22:27.979 [2024-11-21 00:11:18.286991] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:27.979 [2024-11-21 00:11:18.289689] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel | duration: 2.705 ms | status: 0
00:22:27.979 [2024-11-21 00:11:18.290041] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands | duration: 0.018 ms | status: 0
00:22:27.979 [2024-11-21 00:11:18.290098] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:22:27.979 [2024-11-21 00:11:18.290130] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:22:27.979 [2024-11-21 00:11:18.290178] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:22:27.979 [2024-11-21 00:11:18.290196] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:22:27.979 [2024-11-21 00:11:18.290342] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:22:27.979 [2024-11-21 00:11:18.290356] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:22:27.979 [2024-11-21 00:11:18.290370] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:22:27.979 [2024-11-21 00:11:18.290381] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:22:27.979 [2024-11-21 00:11:18.290399] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:22:27.979 [2024-11-21 00:11:18.290408] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:22:27.979 [2024-11-21 00:11:18.290416] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:22:27.979 [2024-11-21 00:11:18.290428] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:22:27.979 [2024-11-21 00:11:18.290441] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:22:27.979 [2024-11-21 00:11:18.290452] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout | duration: 0.358 ms | status: 0
00:22:27.980 [2024-11-21 00:11:18.290570] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout | duration: 0.069 ms | status: 0
00:22:27.980 [2024-11-21 00:11:18.290712] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks, MiB):
    sb                 0.00 / 0.12
    l2p                0.12 / 80.00
    band_md           80.12 / 0.50
    band_md_mirror    80.62 / 0.50
    nvc_md           113.88 / 0.12
    nvc_md_mirror    114.00 / 0.12
    p2l0              81.12 / 8.00
    p2l1              89.12 / 8.00
    p2l2              97.12 / 8.00
    p2l3             105.12 / 8.00
    trim_md          113.12 / 0.25
    trim_md_mirror   113.38 / 0.25
    trim_log         113.62 / 0.12
    trim_log_mirror  113.75 / 0.12
00:22:27.980 [2024-11-21 00:11:18.291083] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks, MiB):
    sb_mirror          0.00 / 0.12
    vmap          102400.25 / 3.38
    data_btm           0.25 / 102400.00
00:22:27.980 [2024-11-21 00:11:18.291176] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:22:27.980 [2024-11-21 00:11:18.291373] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:22:27.980 [2024-11-21 00:11:18.291434] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade | duration: 0.792 ms | status: 0
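The dumped region sizes are consistent with the geometry parameters printed earlier (L2P entries, address size, P2L checkpoint pages), assuming the 4 KiB FTL block size:

\[
\begin{aligned}
\texttt{l2p}\ \text{region}: &\quad 20971520 \times 4\,\mathrm{B} = 80\,\mathrm{MiB} \\
\texttt{p2l0}\ldots\texttt{p2l3}: &\quad 2048 \times 4\,\mathrm{KiB} = 8\,\mathrm{MiB\ each} \\
\text{base data (type 0x9)}: &\quad \texttt{0x1900000} \times 4\,\mathrm{KiB} = 26214400 \times 4\,\mathrm{KiB} = 102400\,\mathrm{MiB}
\end{aligned}
\]

These match the 80.00 MiB l2p region, the 8.00 MiB p2l0-p2l3 regions, and the 102400.00 MiB data_btm region respectively.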
00:22:27.980 [2024-11-21 00:11:18.319243] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata | duration: 27.713 ms | status: 0
00:22:27.980 [2024-11-21 00:11:18.319488] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses | duration: 0.083 ms | status: 0
00:22:27.980 [2024-11-21 00:11:18.335505] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache | duration: 15.854 ms | status: 0
00:22:27.980 [2024-11-21 00:11:18.335617] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map | duration: 0.003 ms | status: 0
00:22:27.981 [2024-11-21 00:11:18.336376] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map | duration: 0.674 ms | status: 0
00:22:27.981 [2024-11-21 00:11:18.336601] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata | duration: 0.140 ms | status: 0
00:22:27.981 [2024-11-21 00:11:18.346162] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc | duration: 9.493 ms | status: 0
00:22:27.981 [2024-11-21 00:11:18.351086] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:22:27.981 [2024-11-21 00:11:18.351139] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:22:27.981 [2024-11-21 00:11:18.351153] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata | duration: 4.753 ms | status: 0
00:22:27.981 [2024-11-21 00:11:18.367451] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata | duration: 16.215 ms | status: 0
00:22:27.981 [2024-11-21 00:11:18.370397] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata | duration: 2.817 ms | status: 0
00:22:27.981 [2024-11-21 00:11:18.372968] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata | duration: 2.457 ms | status: 0
00:22:27.981 [2024-11-21 00:11:18.373433] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing | duration: 0.321 ms | status: 0
00:22:28.242 [2024-11-21 00:11:18.404769] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints | duration: 31.253 ms | status: 0
00:22:28.242 [2024-11-21 00:11:18.414410] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:22:28.242 [2024-11-21 00:11:18.418233] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P | duration: 12.708 ms | status: 0
00:22:28.242 [2024-11-21 00:11:18.418434] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P | duration: 0.016 ms | status: 0
00:22:28.242 [2024-11-21 00:11:18.420474] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization | duration: 1.958 ms | status: 0
00:22:28.242 [2024-11-21 00:11:18.420587] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller | duration: 0.013 ms | status: 0
00:22:28.242 [2024-11-21 00:11:18.420676] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:22:28.242 [2024-11-21 00:11:18.420689] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup | duration: 0.015 ms | status: 0
00:22:28.242 [2024-11-21 00:11:18.427071] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state | duration: 6.327 ms | status: 0
00:22:28.242 [2024-11-21 00:11:18.427239] mngt/ftl_mngt.c:427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization | duration: 0.048 ms | status: 0
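Note the asymmetry: startup ends with "Set FTL dirty state" while the earlier shutdown ended with "Set FTL clean state". That is the usual crash-consistency marker: the superblock is flagged dirty while the device can accept writes, and only flagged clean once all metadata has been persisted, so the next startup knows whether recovery is needed. A minimal sketch of the idea, illustrative only and not SPDK's superblock code:

#include <stdbool.h>
#include <stdio.h>

struct superblock { bool clean; };

/* Stand-in for writing the superblock to media. */
static void persist_superblock(struct superblock *sb)
{
    printf("superblock persisted: clean=%d\n", sb->clean);
}

static void ftl_startup(struct superblock *sb)
{
    if (!sb->clean)
        printf("dirty shutdown detected -> replay NV cache / P2L checkpoints\n");
    sb->clean = false;          /* "Set FTL dirty state": writes may follow */
    persist_superblock(sb);
}

static void ftl_shutdown(struct superblock *sb)
{
    /* ...L2P, NV cache, P2L, band and trim metadata are persisted first... */
    sb->clean = true;           /* "Set FTL clean state" is the last step */
    persist_superblock(sb);
}

int main(void)
{
    struct superblock sb = { .clean = true };
    ftl_startup(&sb);   /* clean start: no recovery needed */
    ftl_shutdown(&sb);  /* orderly shutdown: marked clean again */
    return 0;
}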
00:22:28.242 [2024-11-21 00:11:18.429131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.081 ms, result 0
00:22:29.627 [2024-11-21T00:11:20.989Z] Copying: 6280/1048576 [kB] (6280 kBps) [2024-11-21T00:11:21.927Z] Copying: 17/1024 [MB] (10 MBps) ... [2024-11-21T00:12:43.325Z] Copying: 1019/1024 [MB] (11 MBps) [2024-11-21T00:12:43.325Z] Copying: 1024/1024 [MB] (average 12 MBps) [~85 per-second progress entries, 10-22 MBps, condensed]
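As a sanity check on the condensed progress run: startup finished at wall-clock 00:11:18.429 and the final progress entry is stamped 00:12:43.325, so 1024 MB moved in roughly 85 s, consistent with the reported 12 MBps average:

\[
\frac{1024\ \mathrm{MB}}{00{:}12{:}43.3 - 00{:}11{:}18.4} \approx \frac{1024\ \mathrm{MB}}{84.9\ \mathrm{s}} \approx 12.1\ \mathrm{MB/s}
\]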
duration: 9.124 ms 00:23:52.904 [2024-11-21 00:12:43.299996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.904 [2024-11-21 00:12:43.307144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.904 [2024-11-21 00:12:43.307367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:52.904 [2024-11-21 00:12:43.307480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.763 ms 00:23:52.904 [2024-11-21 00:12:43.307509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.904 [2024-11-21 00:12:43.310221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.904 [2024-11-21 00:12:43.310398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:52.904 [2024-11-21 00:12:43.310626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.638 ms 00:23:52.904 [2024-11-21 00:12:43.310666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.904 [2024-11-21 00:12:43.317024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.904 [2024-11-21 00:12:43.317185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:52.904 [2024-11-21 00:12:43.317247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.230 ms 00:23:52.904 [2024-11-21 00:12:43.317283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.478 [2024-11-21 00:12:43.674091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.478 [2024-11-21 00:12:43.674277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:53.478 [2024-11-21 00:12:43.674421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 356.732 ms 00:23:53.478 [2024-11-21 00:12:43.674449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.478 [2024-11-21 00:12:43.678269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.478 [2024-11-21 00:12:43.678436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:53.478 [2024-11-21 00:12:43.678497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.782 ms 00:23:53.478 [2024-11-21 00:12:43.678519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.478 [2024-11-21 00:12:43.681684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.478 [2024-11-21 00:12:43.681838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:53.478 [2024-11-21 00:12:43.681895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:23:53.478 [2024-11-21 00:12:43.681918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.478 [2024-11-21 00:12:43.684235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.478 [2024-11-21 00:12:43.684397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:53.478 [2024-11-21 00:12:43.684461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.274 ms 00:23:53.478 [2024-11-21 00:12:43.684484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.478 [2024-11-21 00:12:43.686679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.478 [2024-11-21 00:12:43.686829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:53.478 [2024-11-21 
00:12:43.686886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.100 ms 00:23:53.478 [2024-11-21 00:12:43.686909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.478 [2024-11-21 00:12:43.686950] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:53.478 [2024-11-21 00:12:43.686980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:53.478 [2024-11-21 00:12:43.687014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.687991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:53.478 [2024-11-21 00:12:43.688655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688689] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 
00:12:43.688913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.688999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:23:53.479 [2024-11-21 00:12:43.689114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:53.479 [2024-11-21 00:12:43.689149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:53.479 [2024-11-21 00:12:43.689158] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5652dacb-6cf0-4515-b30c-e9fc0a790556 00:23:53.479 [2024-11-21 00:12:43.689172] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:53.479 [2024-11-21 00:12:43.689185] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 51392 00:23:53.479 [2024-11-21 00:12:43.689193] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 50432 00:23:53.479 [2024-11-21 00:12:43.689211] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0190 00:23:53.479 [2024-11-21 00:12:43.689220] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:53.479 [2024-11-21 00:12:43.689229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:53.479 [2024-11-21 00:12:43.689237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:53.479 [2024-11-21 00:12:43.689244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:53.479 [2024-11-21 00:12:43.689251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:53.479 [2024-11-21 00:12:43.689259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.479 [2024-11-21 00:12:43.689269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:53.479 [2024-11-21 00:12:43.689277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.310 ms 00:23:53.479 [2024-11-21 00:12:43.689290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.479 [2024-11-21 00:12:43.692527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.479 [2024-11-21 00:12:43.692689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:53.479 [2024-11-21 00:12:43.692758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:23:53.479 [2024-11-21 00:12:43.692783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.479 [2024-11-21 00:12:43.692976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.479 [2024-11-21 00:12:43.693033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:53.479 [2024-11-21 00:12:43.693108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:23:53.479 [2024-11-21 00:12:43.693132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.479 [2024-11-21 00:12:43.702136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.479 [2024-11-21 00:12:43.702293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.479 [2024-11-21 00:12:43.702374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.479 [2024-11-21 00:12:43.702407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.479 [2024-11-21 00:12:43.702481] 
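The WAF figure in the statistics dump above is simply total writes divided by user writes; a one-line sketch (illustrative, not part of restore.sh) reproduces it from the two logged counters:

awk 'BEGIN { total = 51392; user = 50432; printf "WAF: %.4f\n", total / user }'

51392 / 50432 gives 1.0190, matching the logged value; everything above 1.0 is metadata and relocation traffic added by the FTL itself on top of user I/O.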
00:23:53.479 [2024-11-21 00:12:43.702481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.479 [2024-11-21 00:12:43.702506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:23:53.479 [2024-11-21 00:12:43.702526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.479 [2024-11-21 00:12:43.702546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[... identical Rollback trace_step blocks (duration: 0.000 ms, status: 0) elided for: Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev ...]
00:23:53.480 [2024-11-21 00:12:43.738221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 449.817 ms, result 0
00:23:53.741 
00:23:53.742 
00:23:53.742 00:12:44 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:23:56.285 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:56.285 Process with pid 86032 is not found
00:23:56.285 Remove shared memory files
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86032
00:23:56.285 00:12:46 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86032 ']'
00:23:56.285 00:12:46 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86032
00:23:56.285 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86032) - No such process
00:23:56.285 00:12:46 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86032 is not found'
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:23:56.285 00:12:46 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:23:56.285 ************************************
00:23:56.285 END TEST
ftl_restore 00:23:56.285 ************************************ 00:23:56.285 00:23:56.285 real 5m57.638s 00:23:56.285 user 5m45.149s 00:23:56.285 sys 0m12.205s 00:23:56.285 00:12:46 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.285 00:12:46 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:56.285 00:12:46 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:56.285 00:12:46 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:56.285 00:12:46 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:56.285 00:12:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:56.285 ************************************ 00:23:56.285 START TEST ftl_dirty_shutdown 00:23:56.285 ************************************ 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:56.285 * Looking for test storage... 00:23:56.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.285 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:56.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.286 --rc genhtml_branch_coverage=1 00:23:56.286 --rc genhtml_function_coverage=1 00:23:56.286 --rc genhtml_legend=1 00:23:56.286 --rc geninfo_all_blocks=1 00:23:56.286 --rc geninfo_unexecuted_blocks=1 00:23:56.286 00:23:56.286 ' 00:23:56.286 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:56.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.286 --rc genhtml_branch_coverage=1 00:23:56.286 --rc genhtml_function_coverage=1 00:23:56.286 --rc genhtml_legend=1 00:23:56.286 --rc geninfo_all_blocks=1 00:23:56.286 --rc geninfo_unexecuted_blocks=1 00:23:56.286 00:23:56.286 ' 00:23:56.286 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:56.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.286 --rc genhtml_branch_coverage=1 00:23:56.286 --rc genhtml_function_coverage=1 00:23:56.286 --rc genhtml_legend=1 00:23:56.286 --rc geninfo_all_blocks=1 00:23:56.286 --rc geninfo_unexecuted_blocks=1 00:23:56.286 00:23:56.286 ' 00:23:56.286 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:56.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.286 --rc genhtml_branch_coverage=1 00:23:56.286 --rc genhtml_function_coverage=1 00:23:56.286 --rc genhtml_legend=1 00:23:56.286 --rc geninfo_all_blocks=1 00:23:56.286 --rc geninfo_unexecuted_blocks=1 00:23:56.286 00:23:56.286 ' 00:23:56.286 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:56.286 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:56.286 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:56.547 00:12:46 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89807 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89807 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89807 ']' 00:23:56.547 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.548 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.548 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.548 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.548 00:12:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:56.548 [2024-11-21 00:12:46.801198] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
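For context, the dirty_shutdown.sh@44-47 steps above boil down to the following sketch: launch spdk_tgt pinned to core 0, record its pid, and block until the RPC socket answers. The binary and rpc.py paths are taken from this log; the polling loop is an illustrative stand-in for the waitforlisten() helper in autotest_common.sh, not its actual implementation.

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$spdk_tgt" -m 0x1 &
svcpid=$!

# poll until the target's UNIX domain RPC socket (/var/tmp/spdk.sock) accepts requests
for _ in $(seq 1 100); do
    "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done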
00:23:56.548 [2024-11-21 00:12:46.801762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89807 ] 00:23:56.548 [2024-11-21 00:12:46.938536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.809 [2024-11-21 00:12:47.010697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.382 00:12:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.382 00:12:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:57.382 00:12:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:57.382 00:12:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:57.382 00:12:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:57.382 00:12:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:57.382 00:12:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:57.382 00:12:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:57.644 00:12:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:57.644 00:12:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:57.644 00:12:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:57.644 00:12:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:57.644 00:12:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:57.644 00:12:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:57.644 00:12:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:57.644 00:12:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:57.905 00:12:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:57.905 { 00:23:57.905 "name": "nvme0n1", 00:23:57.905 "aliases": [ 00:23:57.905 "1671f75a-9359-4549-863e-a08fc8ffff26" 00:23:57.905 ], 00:23:57.905 "product_name": "NVMe disk", 00:23:57.905 "block_size": 4096, 00:23:57.905 "num_blocks": 1310720, 00:23:57.905 "uuid": "1671f75a-9359-4549-863e-a08fc8ffff26", 00:23:57.905 "numa_id": -1, 00:23:57.905 "assigned_rate_limits": { 00:23:57.905 "rw_ios_per_sec": 0, 00:23:57.905 "rw_mbytes_per_sec": 0, 00:23:57.905 "r_mbytes_per_sec": 0, 00:23:57.905 "w_mbytes_per_sec": 0 00:23:57.905 }, 00:23:57.905 "claimed": true, 00:23:57.905 "claim_type": "read_many_write_one", 00:23:57.905 "zoned": false, 00:23:57.905 "supported_io_types": { 00:23:57.905 "read": true, 00:23:57.905 "write": true, 00:23:57.905 "unmap": true, 00:23:57.905 "flush": true, 00:23:57.905 "reset": true, 00:23:57.905 "nvme_admin": true, 00:23:57.905 "nvme_io": true, 00:23:57.905 "nvme_io_md": false, 00:23:57.905 "write_zeroes": true, 00:23:57.906 "zcopy": false, 00:23:57.906 "get_zone_info": false, 00:23:57.906 "zone_management": false, 00:23:57.906 "zone_append": false, 00:23:57.906 "compare": true, 00:23:57.906 "compare_and_write": false, 00:23:57.906 "abort": true, 00:23:57.906 "seek_hole": false, 00:23:57.906 "seek_data": false, 00:23:57.906 
"copy": true, 00:23:57.906 "nvme_iov_md": false 00:23:57.906 }, 00:23:57.906 "driver_specific": { 00:23:57.906 "nvme": [ 00:23:57.906 { 00:23:57.906 "pci_address": "0000:00:11.0", 00:23:57.906 "trid": { 00:23:57.906 "trtype": "PCIe", 00:23:57.906 "traddr": "0000:00:11.0" 00:23:57.906 }, 00:23:57.906 "ctrlr_data": { 00:23:57.906 "cntlid": 0, 00:23:57.906 "vendor_id": "0x1b36", 00:23:57.906 "model_number": "QEMU NVMe Ctrl", 00:23:57.906 "serial_number": "12341", 00:23:57.906 "firmware_revision": "8.0.0", 00:23:57.906 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:57.906 "oacs": { 00:23:57.906 "security": 0, 00:23:57.906 "format": 1, 00:23:57.906 "firmware": 0, 00:23:57.906 "ns_manage": 1 00:23:57.906 }, 00:23:57.906 "multi_ctrlr": false, 00:23:57.906 "ana_reporting": false 00:23:57.906 }, 00:23:57.906 "vs": { 00:23:57.906 "nvme_version": "1.4" 00:23:57.906 }, 00:23:57.906 "ns_data": { 00:23:57.906 "id": 1, 00:23:57.906 "can_share": false 00:23:57.906 } 00:23:57.906 } 00:23:57.906 ], 00:23:57.906 "mp_policy": "active_passive" 00:23:57.906 } 00:23:57.906 } 00:23:57.906 ]' 00:23:57.906 00:12:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:57.906 00:12:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:57.906 00:12:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:57.906 00:12:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:57.906 00:12:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:57.906 00:12:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:57.906 00:12:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:57.906 00:12:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:57.906 00:12:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:57.906 00:12:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:57.906 00:12:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:58.167 00:12:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=f687c260-e989-4475-9d6d-04abf91b28e2 00:23:58.167 00:12:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:58.167 00:12:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f687c260-e989-4475-9d6d-04abf91b28e2 00:23:58.428 00:12:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:58.690 00:12:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=10312962-73b9-47dd-ace5-d92fdc507261 00:23:58.690 00:12:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 10312962-73b9-47dd-ace5-d92fdc507261 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=4eb46774-9d77-48e6-a351-10185b6c91d2 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4eb46774-9d77-48e6-a351-10185b6c91d2 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=4eb46774-9d77-48e6-a351-10185b6c91d2 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4eb46774-9d77-48e6-a351-10185b6c91d2 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=4eb46774-9d77-48e6-a351-10185b6c91d2 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4eb46774-9d77-48e6-a351-10185b6c91d2 00:23:58.952 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:58.952 { 00:23:58.952 "name": "4eb46774-9d77-48e6-a351-10185b6c91d2", 00:23:58.952 "aliases": [ 00:23:58.952 "lvs/nvme0n1p0" 00:23:58.952 ], 00:23:58.952 "product_name": "Logical Volume", 00:23:58.952 "block_size": 4096, 00:23:58.952 "num_blocks": 26476544, 00:23:58.952 "uuid": "4eb46774-9d77-48e6-a351-10185b6c91d2", 00:23:58.952 "assigned_rate_limits": { 00:23:58.952 "rw_ios_per_sec": 0, 00:23:58.952 "rw_mbytes_per_sec": 0, 00:23:58.952 "r_mbytes_per_sec": 0, 00:23:58.953 "w_mbytes_per_sec": 0 00:23:58.953 }, 00:23:58.953 "claimed": false, 00:23:58.953 "zoned": false, 00:23:58.953 "supported_io_types": { 00:23:58.953 "read": true, 00:23:58.953 "write": true, 00:23:58.953 "unmap": true, 00:23:58.953 "flush": false, 00:23:58.953 "reset": true, 00:23:58.953 "nvme_admin": false, 00:23:58.953 "nvme_io": false, 00:23:58.953 "nvme_io_md": false, 00:23:58.953 "write_zeroes": true, 00:23:58.953 "zcopy": false, 00:23:58.953 "get_zone_info": false, 00:23:58.953 "zone_management": false, 00:23:58.953 "zone_append": false, 00:23:58.953 "compare": false, 00:23:58.953 "compare_and_write": false, 00:23:58.953 "abort": false, 00:23:58.953 "seek_hole": true, 00:23:58.953 "seek_data": true, 00:23:58.953 "copy": false, 00:23:58.953 "nvme_iov_md": false 00:23:58.953 }, 00:23:58.953 "driver_specific": { 00:23:58.953 "lvol": { 00:23:58.953 "lvol_store_uuid": "10312962-73b9-47dd-ace5-d92fdc507261", 00:23:58.953 "base_bdev": "nvme0n1", 00:23:58.953 "thin_provision": true, 00:23:58.953 "num_allocated_clusters": 0, 00:23:58.953 "snapshot": false, 00:23:58.953 "clone": false, 00:23:58.953 "esnap_clone": false 00:23:58.953 } 00:23:58.953 } 00:23:58.953 } 00:23:58.953 ]' 00:23:58.953 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:59.215 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:59.215 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:59.215 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:59.215 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:59.215 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:59.215 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:59.215 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:59.215 00:12:49 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:59.476 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:59.476 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:59.476 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 4eb46774-9d77-48e6-a351-10185b6c91d2 00:23:59.476 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=4eb46774-9d77-48e6-a351-10185b6c91d2 00:23:59.476 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:59.476 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:59.476 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:59.476 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4eb46774-9d77-48e6-a351-10185b6c91d2 00:23:59.873 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:59.873 { 00:23:59.873 "name": "4eb46774-9d77-48e6-a351-10185b6c91d2", 00:23:59.873 "aliases": [ 00:23:59.873 "lvs/nvme0n1p0" 00:23:59.873 ], 00:23:59.873 "product_name": "Logical Volume", 00:23:59.873 "block_size": 4096, 00:23:59.873 "num_blocks": 26476544, 00:23:59.873 "uuid": "4eb46774-9d77-48e6-a351-10185b6c91d2", 00:23:59.873 "assigned_rate_limits": { 00:23:59.873 "rw_ios_per_sec": 0, 00:23:59.873 "rw_mbytes_per_sec": 0, 00:23:59.873 "r_mbytes_per_sec": 0, 00:23:59.873 "w_mbytes_per_sec": 0 00:23:59.873 }, 00:23:59.873 "claimed": false, 00:23:59.873 "zoned": false, 00:23:59.873 "supported_io_types": { 00:23:59.873 "read": true, 00:23:59.873 "write": true, 00:23:59.873 "unmap": true, 00:23:59.873 "flush": false, 00:23:59.873 "reset": true, 00:23:59.873 "nvme_admin": false, 00:23:59.873 "nvme_io": false, 00:23:59.873 "nvme_io_md": false, 00:23:59.873 "write_zeroes": true, 00:23:59.873 "zcopy": false, 00:23:59.873 "get_zone_info": false, 00:23:59.873 "zone_management": false, 00:23:59.873 "zone_append": false, 00:23:59.873 "compare": false, 00:23:59.873 "compare_and_write": false, 00:23:59.873 "abort": false, 00:23:59.873 "seek_hole": true, 00:23:59.873 "seek_data": true, 00:23:59.873 "copy": false, 00:23:59.873 "nvme_iov_md": false 00:23:59.873 }, 00:23:59.873 "driver_specific": { 00:23:59.873 "lvol": { 00:23:59.873 "lvol_store_uuid": "10312962-73b9-47dd-ace5-d92fdc507261", 00:23:59.873 "base_bdev": "nvme0n1", 00:23:59.873 "thin_provision": true, 00:23:59.873 "num_allocated_clusters": 0, 00:23:59.873 "snapshot": false, 00:23:59.873 "clone": false, 00:23:59.873 "esnap_clone": false 00:23:59.873 } 00:23:59.873 } 00:23:59.873 } 00:23:59.873 ]' 00:23:59.873 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:59.873 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:59.873 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:59.873 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:59.873 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:59.873 00:12:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:59.873 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:59.873 00:12:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:59.873 00:12:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:59.873 00:12:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 4eb46774-9d77-48e6-a351-10185b6c91d2 00:23:59.873 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=4eb46774-9d77-48e6-a351-10185b6c91d2 00:23:59.873 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:59.873 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:59.873 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:59.873 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4eb46774-9d77-48e6-a351-10185b6c91d2 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:00.138 { 00:24:00.138 "name": "4eb46774-9d77-48e6-a351-10185b6c91d2", 00:24:00.138 "aliases": [ 00:24:00.138 "lvs/nvme0n1p0" 00:24:00.138 ], 00:24:00.138 "product_name": "Logical Volume", 00:24:00.138 "block_size": 4096, 00:24:00.138 "num_blocks": 26476544, 00:24:00.138 "uuid": "4eb46774-9d77-48e6-a351-10185b6c91d2", 00:24:00.138 "assigned_rate_limits": { 00:24:00.138 "rw_ios_per_sec": 0, 00:24:00.138 "rw_mbytes_per_sec": 0, 00:24:00.138 "r_mbytes_per_sec": 0, 00:24:00.138 "w_mbytes_per_sec": 0 00:24:00.138 }, 00:24:00.138 "claimed": false, 00:24:00.138 "zoned": false, 00:24:00.138 "supported_io_types": { 00:24:00.138 "read": true, 00:24:00.138 "write": true, 00:24:00.138 "unmap": true, 00:24:00.138 "flush": false, 00:24:00.138 "reset": true, 00:24:00.138 "nvme_admin": false, 00:24:00.138 "nvme_io": false, 00:24:00.138 "nvme_io_md": false, 00:24:00.138 "write_zeroes": true, 00:24:00.138 "zcopy": false, 00:24:00.138 "get_zone_info": false, 00:24:00.138 "zone_management": false, 00:24:00.138 "zone_append": false, 00:24:00.138 "compare": false, 00:24:00.138 "compare_and_write": false, 00:24:00.138 "abort": false, 00:24:00.138 "seek_hole": true, 00:24:00.138 "seek_data": true, 00:24:00.138 "copy": false, 00:24:00.138 "nvme_iov_md": false 00:24:00.138 }, 00:24:00.138 "driver_specific": { 00:24:00.138 "lvol": { 00:24:00.138 "lvol_store_uuid": "10312962-73b9-47dd-ace5-d92fdc507261", 00:24:00.138 "base_bdev": "nvme0n1", 00:24:00.138 "thin_provision": true, 00:24:00.138 "num_allocated_clusters": 0, 00:24:00.138 "snapshot": false, 00:24:00.138 "clone": false, 00:24:00.138 "esnap_clone": false 00:24:00.138 } 00:24:00.138 } 00:24:00.138 } 00:24:00.138 ]' 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4eb46774-9d77-48e6-a351-10185b6c91d2 
--l2p_dram_limit 10' 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:00.138 00:12:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4eb46774-9d77-48e6-a351-10185b6c91d2 --l2p_dram_limit 10 -c nvc0n1p0 00:24:00.402 [2024-11-21 00:12:50.637672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.402 [2024-11-21 00:12:50.637726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:00.402 [2024-11-21 00:12:50.637741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:00.402 [2024-11-21 00:12:50.637750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.402 [2024-11-21 00:12:50.637803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.402 [2024-11-21 00:12:50.637813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:00.402 [2024-11-21 00:12:50.637820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:00.402 [2024-11-21 00:12:50.637829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.402 [2024-11-21 00:12:50.637847] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:00.402 [2024-11-21 00:12:50.638088] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:00.402 [2024-11-21 00:12:50.638100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.402 [2024-11-21 00:12:50.638109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:00.402 [2024-11-21 00:12:50.638119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:24:00.402 [2024-11-21 00:12:50.638127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.402 [2024-11-21 00:12:50.638347] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f1f9902c-6dc4-4138-98d5-da0a85f2c424 00:24:00.402 [2024-11-21 00:12:50.639661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.402 [2024-11-21 00:12:50.639684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:00.402 [2024-11-21 00:12:50.639694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:00.402 [2024-11-21 00:12:50.639701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.402 [2024-11-21 00:12:50.646485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.403 [2024-11-21 00:12:50.646512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:00.403 [2024-11-21 00:12:50.646525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.740 ms 00:24:00.403 [2024-11-21 00:12:50.646534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.403 [2024-11-21 00:12:50.646634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.403 [2024-11-21 00:12:50.646644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:00.403 [2024-11-21 00:12:50.646653] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:00.403 [2024-11-21 00:12:50.646663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.403 [2024-11-21 00:12:50.646702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.403 [2024-11-21 00:12:50.646712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:00.403 [2024-11-21 00:12:50.646723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:00.403 [2024-11-21 00:12:50.646728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.403 [2024-11-21 00:12:50.646748] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:00.403 [2024-11-21 00:12:50.648392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.403 [2024-11-21 00:12:50.648530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:00.403 [2024-11-21 00:12:50.648544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:24:00.403 [2024-11-21 00:12:50.648566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.403 [2024-11-21 00:12:50.648597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.403 [2024-11-21 00:12:50.648609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:00.403 [2024-11-21 00:12:50.648619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:00.403 [2024-11-21 00:12:50.648628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.403 [2024-11-21 00:12:50.648642] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:00.403 [2024-11-21 00:12:50.648766] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:00.403 [2024-11-21 00:12:50.648776] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:00.403 [2024-11-21 00:12:50.648787] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:00.403 [2024-11-21 00:12:50.648795] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:00.403 [2024-11-21 00:12:50.648806] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:00.403 [2024-11-21 00:12:50.648812] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:00.403 [2024-11-21 00:12:50.648823] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:00.403 [2024-11-21 00:12:50.648829] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:00.403 [2024-11-21 00:12:50.648836] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:00.403 [2024-11-21 00:12:50.648846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.403 [2024-11-21 00:12:50.648853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:00.403 [2024-11-21 00:12:50.648860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:24:00.403 [2024-11-21 00:12:50.648871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.403 [2024-11-21 00:12:50.648935] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.403 [2024-11-21 00:12:50.648945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:00.403 [2024-11-21 00:12:50.648951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:00.403 [2024-11-21 00:12:50.648958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.403 [2024-11-21 00:12:50.649031] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:00.403 [2024-11-21 00:12:50.649041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:00.403 [2024-11-21 00:12:50.649048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:00.403 [2024-11-21 00:12:50.649056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:00.403 [2024-11-21 00:12:50.649071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:00.403 [2024-11-21 00:12:50.649083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:00.403 [2024-11-21 00:12:50.649089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:00.403 [2024-11-21 00:12:50.649101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:00.403 [2024-11-21 00:12:50.649109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:00.403 [2024-11-21 00:12:50.649116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:00.403 [2024-11-21 00:12:50.649126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:00.403 [2024-11-21 00:12:50.649133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:00.403 [2024-11-21 00:12:50.649140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:00.403 [2024-11-21 00:12:50.649155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:00.403 [2024-11-21 00:12:50.649161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:00.403 [2024-11-21 00:12:50.649175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.403 [2024-11-21 00:12:50.649188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:00.403 [2024-11-21 00:12:50.649196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.403 [2024-11-21 00:12:50.649209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:00.403 [2024-11-21 00:12:50.649215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.403 [2024-11-21 00:12:50.649228] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:00.403 [2024-11-21 00:12:50.649237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.403 [2024-11-21 00:12:50.649252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:00.403 [2024-11-21 00:12:50.649257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:00.403 [2024-11-21 00:12:50.649270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:00.403 [2024-11-21 00:12:50.649279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:00.403 [2024-11-21 00:12:50.649285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:00.403 [2024-11-21 00:12:50.649291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:00.403 [2024-11-21 00:12:50.649312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:00.403 [2024-11-21 00:12:50.649320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:00.403 [2024-11-21 00:12:50.649334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:00.403 [2024-11-21 00:12:50.649339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649347] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:00.403 [2024-11-21 00:12:50.649359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:00.403 [2024-11-21 00:12:50.649369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:00.403 [2024-11-21 00:12:50.649377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.403 [2024-11-21 00:12:50.649385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:00.403 [2024-11-21 00:12:50.649392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:00.403 [2024-11-21 00:12:50.649400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:00.403 [2024-11-21 00:12:50.649406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:00.403 [2024-11-21 00:12:50.649414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:00.403 [2024-11-21 00:12:50.649420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:00.403 [2024-11-21 00:12:50.649431] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:00.403 [2024-11-21 00:12:50.649439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:00.403 [2024-11-21 00:12:50.649451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:00.403 [2024-11-21 00:12:50.649459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:00.403 [2024-11-21 00:12:50.649467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:00.403 [2024-11-21 00:12:50.649473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:00.403 [2024-11-21 00:12:50.649481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:00.403 [2024-11-21 00:12:50.649488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:00.403 [2024-11-21 00:12:50.649497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:00.403 [2024-11-21 00:12:50.649502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:00.403 [2024-11-21 00:12:50.649517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:00.404 [2024-11-21 00:12:50.649523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:00.404 [2024-11-21 00:12:50.649530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:00.404 [2024-11-21 00:12:50.649535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:00.404 [2024-11-21 00:12:50.649542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:00.404 [2024-11-21 00:12:50.649547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:00.404 [2024-11-21 00:12:50.649554] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:00.404 [2024-11-21 00:12:50.649562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:00.404 [2024-11-21 00:12:50.649570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:00.404 [2024-11-21 00:12:50.649575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:00.404 [2024-11-21 00:12:50.649583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:00.404 [2024-11-21 00:12:50.649589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:00.404 [2024-11-21 00:12:50.649596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.404 [2024-11-21 00:12:50.649602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:00.404 [2024-11-21 00:12:50.649611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:24:00.404 [2024-11-21 00:12:50.649616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.404 [2024-11-21 00:12:50.649650] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:00.404 [2024-11-21 00:12:50.649661] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:03.687 [2024-11-21 00:12:53.913662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.687 [2024-11-21 00:12:53.913724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:03.687 [2024-11-21 00:12:53.913742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3263.998 ms 00:24:03.687 [2024-11-21 00:12:53.913749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.687 [2024-11-21 00:12:53.924240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.687 [2024-11-21 00:12:53.924277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:03.687 [2024-11-21 00:12:53.924289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.406 ms 00:24:03.687 [2024-11-21 00:12:53.924310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.687 [2024-11-21 00:12:53.924401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.687 [2024-11-21 00:12:53.924409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:03.687 [2024-11-21 00:12:53.924419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:03.687 [2024-11-21 00:12:53.924426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.687 [2024-11-21 00:12:53.933492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.687 [2024-11-21 00:12:53.933642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:03.687 [2024-11-21 00:12:53.933658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.029 ms 00:24:03.687 [2024-11-21 00:12:53.933665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.687 [2024-11-21 00:12:53.933691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.687 [2024-11-21 00:12:53.933698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:03.687 [2024-11-21 00:12:53.933708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:03.687 [2024-11-21 00:12:53.933714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.687 [2024-11-21 00:12:53.934347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.687 [2024-11-21 00:12:53.934367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:03.687 [2024-11-21 00:12:53.934377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:24:03.687 [2024-11-21 00:12:53.934384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.687 [2024-11-21 00:12:53.934479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.687 [2024-11-21 00:12:53.934487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:03.687 [2024-11-21 00:12:53.934496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:03.687 [2024-11-21 00:12:53.934505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.687 [2024-11-21 00:12:53.957183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.687 [2024-11-21 00:12:53.957222] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:03.687 [2024-11-21 00:12:53.957241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.647 ms 00:24:03.687 [2024-11-21 00:12:53.957250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.687 [2024-11-21 00:12:53.966411] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:03.687 [2024-11-21 00:12:53.969754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.687 [2024-11-21 00:12:53.969781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:03.687 [2024-11-21 00:12:53.969790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.393 ms 00:24:03.687 [2024-11-21 00:12:53.969799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.687 [2024-11-21 00:12:54.037826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.687 [2024-11-21 00:12:54.037858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:03.687 [2024-11-21 00:12:54.037867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.004 ms 00:24:03.687 [2024-11-21 00:12:54.037882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.687 [2024-11-21 00:12:54.038033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.687 [2024-11-21 00:12:54.038044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:03.688 [2024-11-21 00:12:54.038052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:03.688 [2024-11-21 00:12:54.038063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.688 [2024-11-21 00:12:54.041757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.688 [2024-11-21 00:12:54.041786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:03.688 [2024-11-21 00:12:54.041794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:24:03.688 [2024-11-21 00:12:54.041802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.688 [2024-11-21 00:12:54.044843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.688 [2024-11-21 00:12:54.044955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:03.688 [2024-11-21 00:12:54.044968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:24:03.688 [2024-11-21 00:12:54.044975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.688 [2024-11-21 00:12:54.045216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.688 [2024-11-21 00:12:54.045226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:03.688 [2024-11-21 00:12:54.045233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:24:03.688 [2024-11-21 00:12:54.045242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.688 [2024-11-21 00:12:54.076062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.688 [2024-11-21 00:12:54.076171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:03.688 [2024-11-21 00:12:54.076184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.805 ms 00:24:03.688 [2024-11-21 00:12:54.076192] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.688 [2024-11-21 00:12:54.080930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.688 [2024-11-21 00:12:54.080958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:03.688 [2024-11-21 00:12:54.080966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.701 ms 00:24:03.688 [2024-11-21 00:12:54.080974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.688 [2024-11-21 00:12:54.084350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.688 [2024-11-21 00:12:54.084376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:03.688 [2024-11-21 00:12:54.084383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.348 ms 00:24:03.688 [2024-11-21 00:12:54.084390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.688 [2024-11-21 00:12:54.088441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.688 [2024-11-21 00:12:54.088469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:03.688 [2024-11-21 00:12:54.088475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.024 ms 00:24:03.688 [2024-11-21 00:12:54.088484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.688 [2024-11-21 00:12:54.088515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.688 [2024-11-21 00:12:54.088525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:03.688 [2024-11-21 00:12:54.088536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:03.688 [2024-11-21 00:12:54.088544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.688 [2024-11-21 00:12:54.088613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.688 [2024-11-21 00:12:54.088622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:03.688 [2024-11-21 00:12:54.088629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:03.688 [2024-11-21 00:12:54.088637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.688 [2024-11-21 00:12:54.089471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3451.405 ms, result 0 00:24:03.688 { 00:24:03.688 "name": "ftl0", 00:24:03.688 "uuid": "f1f9902c-6dc4-4138-98d5-da0a85f2c424" 00:24:03.688 } 00:24:03.946 00:12:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:03.946 00:12:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:03.946 00:12:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:03.946 00:12:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:03.946 00:12:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:04.205 /dev/nbd0 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:04.205 1+0 records in 00:24:04.205 1+0 records out 00:24:04.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255215 s, 16.0 MB/s 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:24:04.205 00:12:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:04.205 [2024-11-21 00:12:54.601778] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:04.205 [2024-11-21 00:12:54.601892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89943 ] 00:24:04.464 [2024-11-21 00:12:54.737623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.464 [2024-11-21 00:12:54.770637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.838  [2024-11-21T00:12:56.826Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-21T00:12:58.200Z] Copying: 433/1024 [MB] (237 MBps) [2024-11-21T00:12:59.135Z] Copying: 698/1024 [MB] (264 MBps) [2024-11-21T00:12:59.135Z] Copying: 957/1024 [MB] (259 MBps) [2024-11-21T00:12:59.401Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:24:08.980 00:24:08.980 00:12:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:10.880 00:13:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:10.880 [2024-11-21 00:13:01.250042] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
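Annotation: the trace above brings the FTL device up step by step; condensed, and with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, the sequence is sketched below. The sizes are plain arithmetic from the bdev_get_bdevs output: get_bdev_size reports block_size x num_blocks in MiB (4096 B x 26476544 blocks = 103424 MiB), and the 5171 MiB cache split is 5% of that, truncated, which is how ftl/common.sh@48 appears to derive cache_size.

rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache-side NVMe controller
rpc.py bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB write-buffer partition
rpc.py -t 240 bdev_ftl_create -b ftl0 \
    -d 4eb46774-9d77-48e6-a351-10185b6c91d2 \
    --l2p_dram_limit 10 -c nvc0n1p0                                  # FTL on the lvol, nvc0n1p0 as NV cache
modprobe nbd
rpc.py nbd_start_disk ftl0 /dev/nbd0                                 # expose ftl0 as a block device

Every command here appears verbatim in the trace; only the rpc.py path is shortened.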
00:24:10.880 [2024-11-21 00:13:01.250153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90015 ] 00:24:11.140 [2024-11-21 00:13:01.385284] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.140 [2024-11-21 00:13:01.417856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.077  [2024-11-21T00:13:03.873Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-21T00:13:04.815Z] Copying: 37/1024 [MB] (22 MBps) [2024-11-21T00:13:05.757Z] Copying: 70/1024 [MB] (33 MBps) [2024-11-21T00:13:06.695Z] Copying: 97/1024 [MB] (26 MBps) [2024-11-21T00:13:07.635Z] Copying: 126/1024 [MB] (29 MBps) [2024-11-21T00:13:08.575Z] Copying: 153/1024 [MB] (26 MBps) [2024-11-21T00:13:09.511Z] Copying: 179/1024 [MB] (25 MBps) [2024-11-21T00:13:10.887Z] Copying: 211/1024 [MB] (31 MBps) [2024-11-21T00:13:11.827Z] Copying: 242/1024 [MB] (31 MBps) [2024-11-21T00:13:12.771Z] Copying: 273/1024 [MB] (30 MBps) [2024-11-21T00:13:13.713Z] Copying: 300/1024 [MB] (26 MBps) [2024-11-21T00:13:14.650Z] Copying: 330/1024 [MB] (29 MBps) [2024-11-21T00:13:15.590Z] Copying: 361/1024 [MB] (31 MBps) [2024-11-21T00:13:16.525Z] Copying: 392/1024 [MB] (30 MBps) [2024-11-21T00:13:17.903Z] Copying: 421/1024 [MB] (28 MBps) [2024-11-21T00:13:18.472Z] Copying: 454/1024 [MB] (33 MBps) [2024-11-21T00:13:19.872Z] Copying: 486/1024 [MB] (32 MBps) [2024-11-21T00:13:20.815Z] Copying: 520/1024 [MB] (33 MBps) [2024-11-21T00:13:21.754Z] Copying: 552/1024 [MB] (31 MBps) [2024-11-21T00:13:22.694Z] Copying: 583/1024 [MB] (31 MBps) [2024-11-21T00:13:23.630Z] Copying: 615/1024 [MB] (31 MBps) [2024-11-21T00:13:24.568Z] Copying: 648/1024 [MB] (33 MBps) [2024-11-21T00:13:25.504Z] Copying: 679/1024 [MB] (30 MBps) [2024-11-21T00:13:26.886Z] Copying: 712/1024 [MB] (33 MBps) [2024-11-21T00:13:27.821Z] Copying: 743/1024 [MB] (30 MBps) [2024-11-21T00:13:28.757Z] Copying: 777/1024 [MB] (33 MBps) [2024-11-21T00:13:29.783Z] Copying: 808/1024 [MB] (30 MBps) [2024-11-21T00:13:30.726Z] Copying: 845/1024 [MB] (37 MBps) [2024-11-21T00:13:31.666Z] Copying: 880/1024 [MB] (34 MBps) [2024-11-21T00:13:32.604Z] Copying: 910/1024 [MB] (29 MBps) [2024-11-21T00:13:33.544Z] Copying: 941/1024 [MB] (31 MBps) [2024-11-21T00:13:34.480Z] Copying: 971/1024 [MB] (30 MBps) [2024-11-21T00:13:35.046Z] Copying: 1006/1024 [MB] (35 MBps) [2024-11-21T00:13:35.304Z] Copying: 1024/1024 [MB] (average 30 MBps) 00:24:44.883 00:24:44.883 00:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:44.883 00:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:44.883 00:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:45.143 [2024-11-21 00:13:35.465693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.465739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:45.143 [2024-11-21 00:13:35.465753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:45.143 [2024-11-21 00:13:35.465760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.465781] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:24:45.143 [2024-11-21 00:13:35.466281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.466320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:45.143 [2024-11-21 00:13:35.466329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:24:45.143 [2024-11-21 00:13:35.466339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.467906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.468044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:45.143 [2024-11-21 00:13:35.468058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:24:45.143 [2024-11-21 00:13:35.468066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.483975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.484067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:45.143 [2024-11-21 00:13:35.484112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.893 ms 00:24:45.143 [2024-11-21 00:13:35.484132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.488851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.488933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:45.143 [2024-11-21 00:13:35.488977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.670 ms 00:24:45.143 [2024-11-21 00:13:35.488997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.490347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.490439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:45.143 [2024-11-21 00:13:35.490516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:24:45.143 [2024-11-21 00:13:35.490536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.495005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.495092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:45.143 [2024-11-21 00:13:35.495132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.435 ms 00:24:45.143 [2024-11-21 00:13:35.495156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.495256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.495278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:45.143 [2024-11-21 00:13:35.495294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:45.143 [2024-11-21 00:13:35.495320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.498036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.498118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:45.143 [2024-11-21 00:13:35.498156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.693 ms 00:24:45.143 
[2024-11-21 00:13:35.498175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.500117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.500205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:45.143 [2024-11-21 00:13:35.500243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.910 ms 00:24:45.143 [2024-11-21 00:13:35.500261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.501845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.501925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:45.143 [2024-11-21 00:13:35.501963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms 00:24:45.143 [2024-11-21 00:13:35.501981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.503696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.143 [2024-11-21 00:13:35.503774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:45.143 [2024-11-21 00:13:35.503811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:24:45.143 [2024-11-21 00:13:35.503829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.143 [2024-11-21 00:13:35.503861] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:45.143 [2024-11-21 00:13:35.503884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.503908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.503934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.503956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:24:45.143 [2024-11-21 00:13:35.504354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.504982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.505005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.505065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.505094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.505117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.505141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.505163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.505196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.505325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:45.143 [2024-11-21 00:13:35.505352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.505983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.506963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507028] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:45.144 [2024-11-21 00:13:35.507241] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:45.144 [2024-11-21 00:13:35.507248] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f1f9902c-6dc4-4138-98d5-da0a85f2c424 00:24:45.144 [2024-11-21 00:13:35.507260] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:45.144 [2024-11-21 00:13:35.507268] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:45.144 [2024-11-21 00:13:35.507277] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:45.144 [2024-11-21 00:13:35.507283] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:45.144 [2024-11-21 00:13:35.507291] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:45.144 [2024-11-21 00:13:35.507385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:45.144 [2024-11-21 00:13:35.507406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:45.144 [2024-11-21 00:13:35.507449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:45.144 [2024-11-21 00:13:35.507471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:45.144 [2024-11-21 00:13:35.507485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.144 [2024-11-21 00:13:35.507497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:45.144 [2024-11-21 00:13:35.507505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.626 ms 00:24:45.144 [2024-11-21 00:13:35.507512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.144 [2024-11-21 00:13:35.509128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.144 [2024-11-21 00:13:35.509151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:24:45.144 [2024-11-21 00:13:35.509162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:24:45.144 [2024-11-21 00:13:35.509170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.144 [2024-11-21 00:13:35.509266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.144 [2024-11-21 00:13:35.509275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:45.144 [2024-11-21 00:13:35.509282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:45.144 [2024-11-21 00:13:35.509289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.144 [2024-11-21 00:13:35.515123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.144 [2024-11-21 00:13:35.515241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:45.144 [2024-11-21 00:13:35.515254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.144 [2024-11-21 00:13:35.515262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.144 [2024-11-21 00:13:35.515350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.144 [2024-11-21 00:13:35.515360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:45.144 [2024-11-21 00:13:35.515367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.144 [2024-11-21 00:13:35.515375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.144 [2024-11-21 00:13:35.515442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.144 [2024-11-21 00:13:35.515463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:45.144 [2024-11-21 00:13:35.515470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.144 [2024-11-21 00:13:35.515477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.144 [2024-11-21 00:13:35.515491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.144 [2024-11-21 00:13:35.515500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:45.144 [2024-11-21 00:13:35.515506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.144 [2024-11-21 00:13:35.515513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.144 [2024-11-21 00:13:35.525754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.145 [2024-11-21 00:13:35.525885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:45.145 [2024-11-21 00:13:35.525897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.145 [2024-11-21 00:13:35.525906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.145 [2024-11-21 00:13:35.534754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.145 [2024-11-21 00:13:35.534792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:45.145 [2024-11-21 00:13:35.534801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.145 [2024-11-21 00:13:35.534810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.145 [2024-11-21 00:13:35.534910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.145 [2024-11-21 
00:13:35.534924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:45.145 [2024-11-21 00:13:35.534931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.145 [2024-11-21 00:13:35.534938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.145 [2024-11-21 00:13:35.534969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.145 [2024-11-21 00:13:35.534982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:45.145 [2024-11-21 00:13:35.534989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.145 [2024-11-21 00:13:35.534997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.145 [2024-11-21 00:13:35.535055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.145 [2024-11-21 00:13:35.535066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:45.145 [2024-11-21 00:13:35.535077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.145 [2024-11-21 00:13:35.535084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.145 [2024-11-21 00:13:35.535109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.145 [2024-11-21 00:13:35.535118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:45.145 [2024-11-21 00:13:35.535125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.145 [2024-11-21 00:13:35.535133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.145 [2024-11-21 00:13:35.535166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.145 [2024-11-21 00:13:35.535176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:45.145 [2024-11-21 00:13:35.535184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.145 [2024-11-21 00:13:35.535192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.145 [2024-11-21 00:13:35.535233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.145 [2024-11-21 00:13:35.535246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:45.145 [2024-11-21 00:13:35.535252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.145 [2024-11-21 00:13:35.535260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.145 [2024-11-21 00:13:35.535567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.833 ms, result 0 00:24:45.145 true 00:24:45.145 00:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89807 00:24:45.402 00:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89807 00:24:45.402 00:13:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:45.402 [2024-11-21 00:13:35.619615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
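dirty_shutdown.sh lines 83-87, echoed just above, first SIGKILL the running spdk_tgt so the FTL device gets no further chance at a clean shutdown, then generate 1 GiB of reference data. A minimal sketch of that sequence, reusing the literal PID and paths from this run (a fresh run substitutes its own):

  kill -9 89807                                   # crash the target process outright
  rm -f /dev/shm/spdk_tgt_trace.pid89807          # drop its stale trace shm file
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/dev/urandom \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
    --bs=4096 --count=262144                      # 262144 * 4096 B = 1024 MiB

The 1024 MiB product is why the progress lines that follow count up to "1024/1024 [MB]".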
00:24:45.402 [2024-11-21 00:13:35.619719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90375 ] 00:24:45.402 [2024-11-21 00:13:35.751638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.402 [2024-11-21 00:13:35.792892] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.811  [2024-11-21T00:13:38.165Z] Copying: 257/1024 [MB] (257 MBps) [2024-11-21T00:13:39.100Z] Copying: 513/1024 [MB] (256 MBps) [2024-11-21T00:13:40.035Z] Copying: 768/1024 [MB] (254 MBps) [2024-11-21T00:13:40.035Z] Copying: 1021/1024 [MB] (253 MBps) [2024-11-21T00:13:40.293Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:24:49.872 00:24:49.872 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89807 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:49.872 00:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:49.872 [2024-11-21 00:13:40.115829] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:49.872 [2024-11-21 00:13:40.115946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90430 ] 00:24:49.872 [2024-11-21 00:13:40.247962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.872 [2024-11-21 00:13:40.288493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.130 [2024-11-21 00:13:40.386875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.130 [2024-11-21 00:13:40.386938] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:50.130 [2024-11-21 00:13:40.449386] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:50.130 [2024-11-21 00:13:40.449985] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:50.130 [2024-11-21 00:13:40.450586] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:50.698 [2024-11-21 00:13:40.903126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.698 [2024-11-21 00:13:40.903164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:50.698 [2024-11-21 00:13:40.903176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:50.698 [2024-11-21 00:13:40.903187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.698 [2024-11-21 00:13:40.903225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.698 [2024-11-21 00:13:40.903236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.698 [2024-11-21 00:13:40.903244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:50.698 [2024-11-21 00:13:40.903253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.698 [2024-11-21 00:13:40.903267] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 
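The second spdk_dd invocation (dirty_shutdown.sh@88, echoed above) replays testfile2 onto the FTL bdev itself, resolved through the JSON config rather than a file path. A sketch of that call as logged, with comments on the flags:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
    --ob=ftl0 \
    --count=262144 --seek=262144 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  # --ob names an output bdev defined in ftl.json (vs --of for a plain file);
  # --seek skips 262144 output blocks first -- at the bdev's block size
  # (4096 B here, going by the --bs used against the same data) that lands
  # the write 1 GiB into the device.

The FTL startup trace interleaved here runs against the dirty device left behind by the kill -9, which is why the nvc0n1p0 write buffer cache is re-opened and recovery notices appear.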
00:24:50.698 [2024-11-21 00:13:40.903473] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:50.698 [2024-11-21 00:13:40.903485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.698 [2024-11-21 00:13:40.903491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.698 [2024-11-21 00:13:40.903498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:24:50.698 [2024-11-21 00:13:40.903504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.698 [2024-11-21 00:13:40.904757] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:50.698 [2024-11-21 00:13:40.907460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.698 [2024-11-21 00:13:40.907492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:50.698 [2024-11-21 00:13:40.907500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.705 ms 00:24:50.698 [2024-11-21 00:13:40.907507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.698 [2024-11-21 00:13:40.907554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.699 [2024-11-21 00:13:40.907561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:50.699 [2024-11-21 00:13:40.907568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:50.699 [2024-11-21 00:13:40.907573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.699 [2024-11-21 00:13:40.913789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.699 [2024-11-21 00:13:40.913817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.699 [2024-11-21 00:13:40.913830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.167 ms 00:24:50.699 [2024-11-21 00:13:40.913836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.699 [2024-11-21 00:13:40.913903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.699 [2024-11-21 00:13:40.913911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.699 [2024-11-21 00:13:40.913917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:50.699 [2024-11-21 00:13:40.913923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.699 [2024-11-21 00:13:40.913955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.699 [2024-11-21 00:13:40.913969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:50.699 [2024-11-21 00:13:40.913976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:50.699 [2024-11-21 00:13:40.913986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.699 [2024-11-21 00:13:40.914005] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:50.699 [2024-11-21 00:13:40.915559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.699 [2024-11-21 00:13:40.915580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.699 [2024-11-21 00:13:40.915587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:24:50.699 [2024-11-21 00:13:40.915593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:50.699 [2024-11-21 00:13:40.915618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.699 [2024-11-21 00:13:40.915628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:50.699 [2024-11-21 00:13:40.915635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:50.699 [2024-11-21 00:13:40.915640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.699 [2024-11-21 00:13:40.915656] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:50.699 [2024-11-21 00:13:40.915672] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:50.699 [2024-11-21 00:13:40.915706] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:50.699 [2024-11-21 00:13:40.915722] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:50.699 [2024-11-21 00:13:40.915807] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:50.699 [2024-11-21 00:13:40.915816] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:50.699 [2024-11-21 00:13:40.915826] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:50.699 [2024-11-21 00:13:40.915834] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:50.699 [2024-11-21 00:13:40.915841] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:50.699 [2024-11-21 00:13:40.915847] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:50.699 [2024-11-21 00:13:40.915854] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:50.699 [2024-11-21 00:13:40.915860] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:50.699 [2024-11-21 00:13:40.915866] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:50.699 [2024-11-21 00:13:40.915872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.699 [2024-11-21 00:13:40.915880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:50.699 [2024-11-21 00:13:40.915889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:24:50.699 [2024-11-21 00:13:40.915895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.699 [2024-11-21 00:13:40.915959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.699 [2024-11-21 00:13:40.915969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:50.699 [2024-11-21 00:13:40.915975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:50.699 [2024-11-21 00:13:40.915983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.699 [2024-11-21 00:13:40.916060] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:50.699 [2024-11-21 00:13:40.916068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:50.699 [2024-11-21 00:13:40.916079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:50.699 [2024-11-21 
00:13:40.916086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:50.699 [2024-11-21 00:13:40.916098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:50.699 [2024-11-21 00:13:40.916111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:50.699 [2024-11-21 00:13:40.916117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.699 [2024-11-21 00:13:40.916129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:50.699 [2024-11-21 00:13:40.916134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:50.699 [2024-11-21 00:13:40.916139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:50.699 [2024-11-21 00:13:40.916145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:50.699 [2024-11-21 00:13:40.916151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:50.699 [2024-11-21 00:13:40.916156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:50.699 [2024-11-21 00:13:40.916172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:50.699 [2024-11-21 00:13:40.916178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:50.699 [2024-11-21 00:13:40.916189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.699 [2024-11-21 00:13:40.916199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:50.699 [2024-11-21 00:13:40.916204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.699 [2024-11-21 00:13:40.916216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:50.699 [2024-11-21 00:13:40.916221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.699 [2024-11-21 00:13:40.916234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:50.699 [2024-11-21 00:13:40.916240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:50.699 [2024-11-21 00:13:40.916252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:50.699 [2024-11-21 00:13:40.916259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.699 [2024-11-21 00:13:40.916271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 
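The 80.00 MiB l2p region in the layout dump above is consistent with the counts printed a few records earlier (L2P entries: 20971520, L2P address size: 4): one 4-byte entry per mapped block. A one-line check:

  echo $(( 20971520 * 4 / 1024 / 1024 ))          # -> 80, the "l2p ... 80.00 MiB" above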
00:24:50.699 [2024-11-21 00:13:40.916277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:50.699 [2024-11-21 00:13:40.916283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:50.699 [2024-11-21 00:13:40.916289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:50.699 [2024-11-21 00:13:40.916309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:50.699 [2024-11-21 00:13:40.916315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:50.699 [2024-11-21 00:13:40.916328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:50.699 [2024-11-21 00:13:40.916334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916340] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:50.699 [2024-11-21 00:13:40.916347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:50.699 [2024-11-21 00:13:40.916357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:50.699 [2024-11-21 00:13:40.916365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:50.699 [2024-11-21 00:13:40.916372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:50.699 [2024-11-21 00:13:40.916380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:50.699 [2024-11-21 00:13:40.916386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:50.699 [2024-11-21 00:13:40.916392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:50.699 [2024-11-21 00:13:40.916398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:50.699 [2024-11-21 00:13:40.916404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:50.699 [2024-11-21 00:13:40.916411] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:50.699 [2024-11-21 00:13:40.916420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.699 [2024-11-21 00:13:40.916431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:50.699 [2024-11-21 00:13:40.916437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:50.699 [2024-11-21 00:13:40.916443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:50.699 [2024-11-21 00:13:40.916449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:50.700 [2024-11-21 00:13:40.916456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:50.700 [2024-11-21 00:13:40.916467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:50.700 [2024-11-21 00:13:40.916474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 
blk_offs:0x6920 blk_sz:0x800 00:24:50.700 [2024-11-21 00:13:40.916480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:50.700 [2024-11-21 00:13:40.916487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:50.700 [2024-11-21 00:13:40.916494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:50.700 [2024-11-21 00:13:40.916505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:50.700 [2024-11-21 00:13:40.916511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:50.700 [2024-11-21 00:13:40.916517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:50.700 [2024-11-21 00:13:40.916524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:50.700 [2024-11-21 00:13:40.916530] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:50.700 [2024-11-21 00:13:40.916545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:50.700 [2024-11-21 00:13:40.916553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:50.700 [2024-11-21 00:13:40.916559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:50.700 [2024-11-21 00:13:40.916566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:50.700 [2024-11-21 00:13:40.916572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:50.700 [2024-11-21 00:13:40.916579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.916588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:50.700 [2024-11-21 00:13:40.916594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:24:50.700 [2024-11-21 00:13:40.916603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.943316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.943345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:50.700 [2024-11-21 00:13:40.943354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.678 ms 00:24:50.700 [2024-11-21 00:13:40.943363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.943429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.943436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:50.700 [2024-11-21 00:13:40.943447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:50.700 
[2024-11-21 00:13:40.943453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.952508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.952546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.700 [2024-11-21 00:13:40.952555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.013 ms 00:24:50.700 [2024-11-21 00:13:40.952561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.952584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.952595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:50.700 [2024-11-21 00:13:40.952602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:50.700 [2024-11-21 00:13:40.952608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.953002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.953016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:50.700 [2024-11-21 00:13:40.953023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:24:50.700 [2024-11-21 00:13:40.953035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.953145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.953155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:50.700 [2024-11-21 00:13:40.953165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:24:50.700 [2024-11-21 00:13:40.953172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.958542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.958705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:50.700 [2024-11-21 00:13:40.958716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.352 ms 00:24:50.700 [2024-11-21 00:13:40.958729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.963426] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:50.700 [2024-11-21 00:13:40.963456] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:50.700 [2024-11-21 00:13:40.963465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.963472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:50.700 [2024-11-21 00:13:40.963479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.656 ms 00:24:50.700 [2024-11-21 00:13:40.963485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.975176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.975278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:50.700 [2024-11-21 00:13:40.975292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.663 ms 00:24:50.700 [2024-11-21 00:13:40.975319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.977093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.977118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:50.700 [2024-11-21 00:13:40.977126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.738 ms 00:24:50.700 [2024-11-21 00:13:40.977132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.978785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.978808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:50.700 [2024-11-21 00:13:40.978815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:24:50.700 [2024-11-21 00:13:40.978820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.979068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.979081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:50.700 [2024-11-21 00:13:40.979093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:24:50.700 [2024-11-21 00:13:40.979099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:40.997094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:40.997130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:50.700 [2024-11-21 00:13:40.997139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.979 ms 00:24:50.700 [2024-11-21 00:13:40.997146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:41.002925] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:50.700 [2024-11-21 00:13:41.005173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:41.005278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:50.700 [2024-11-21 00:13:41.005290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.997 ms 00:24:50.700 [2024-11-21 00:13:41.005305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:41.005373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:41.005382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:50.700 [2024-11-21 00:13:41.005393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:50.700 [2024-11-21 00:13:41.005399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:41.005463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:41.005472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:50.700 [2024-11-21 00:13:41.005478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:50.700 [2024-11-21 00:13:41.005485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:41.005506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:41.005513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
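Startup on the dirty device is dominated by the restore steps traced above (Restore P2L checkpoints at 17.979 ms and Restore valid map at 11.663 ms, against the 106.769 ms total reported just below). Given a saved copy of this trace, the per-step costs can be tabulated with a couple of greps; ftl_startup.log is a hypothetical file holding these records one per line, as spdk_tgt originally emitted them:

  # Pair each step name with its duration; every trace_step group prints
  # exactly one "name:" and one "duration:" record, so the columns line up.
  paste \
    <(grep -oP 'trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: \K.+' ftl_startup.log) \
    <(grep -oP 'trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: \K[0-9.]+ ms' ftl_startup.log)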
00:24:50.700 [2024-11-21 00:13:41.005520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:50.700 [2024-11-21 00:13:41.005526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:41.005556] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:50.700 [2024-11-21 00:13:41.005566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:41.005573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:50.700 [2024-11-21 00:13:41.005582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:50.700 [2024-11-21 00:13:41.005588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:41.009262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:41.009290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:50.700 [2024-11-21 00:13:41.009325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.660 ms 00:24:50.700 [2024-11-21 00:13:41.009332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:41.009388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.700 [2024-11-21 00:13:41.009398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:50.700 [2024-11-21 00:13:41.009405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:50.700 [2024-11-21 00:13:41.009412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.700 [2024-11-21 00:13:41.010258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.769 ms, result 0 00:24:51.636  [2024-11-21T00:13:43.431Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-21T00:13:44.366Z] Copying: 38/1024 [MB] (20 MBps) [2024-11-21T00:13:45.308Z] Copying: 57/1024 [MB] (18 MBps) [2024-11-21T00:13:46.242Z] Copying: 68/1024 [MB] (11 MBps) [2024-11-21T00:13:47.179Z] Copying: 79/1024 [MB] (11 MBps) [2024-11-21T00:13:48.122Z] Copying: 92/1024 [MB] (12 MBps) [2024-11-21T00:13:49.058Z] Copying: 103/1024 [MB] (11 MBps) [2024-11-21T00:13:50.440Z] Copying: 114/1024 [MB] (10 MBps) [2024-11-21T00:13:51.375Z] Copying: 125/1024 [MB] (10 MBps) [2024-11-21T00:13:52.309Z] Copying: 136/1024 [MB] (10 MBps) [2024-11-21T00:13:53.243Z] Copying: 147/1024 [MB] (11 MBps) [2024-11-21T00:13:54.184Z] Copying: 158/1024 [MB] (11 MBps) [2024-11-21T00:13:55.124Z] Copying: 174/1024 [MB] (15 MBps) [2024-11-21T00:13:56.059Z] Copying: 185/1024 [MB] (10 MBps) [2024-11-21T00:13:57.435Z] Copying: 196/1024 [MB] (11 MBps) [2024-11-21T00:13:58.028Z] Copying: 208/1024 [MB] (11 MBps) [2024-11-21T00:13:59.404Z] Copying: 221/1024 [MB] (13 MBps) [2024-11-21T00:14:00.343Z] Copying: 232/1024 [MB] (11 MBps) [2024-11-21T00:14:01.288Z] Copying: 245/1024 [MB] (13 MBps) [2024-11-21T00:14:02.225Z] Copying: 255/1024 [MB] (10 MBps) [2024-11-21T00:14:03.160Z] Copying: 266/1024 [MB] (10 MBps) [2024-11-21T00:14:04.095Z] Copying: 278/1024 [MB] (11 MBps) [2024-11-21T00:14:05.035Z] Copying: 290/1024 [MB] (11 MBps) [2024-11-21T00:14:06.413Z] Copying: 301/1024 [MB] (10 MBps) [2024-11-21T00:14:07.348Z] Copying: 312/1024 [MB] (11 MBps) [2024-11-21T00:14:08.291Z] Copying: 327/1024 [MB] (14 MBps) [2024-11-21T00:14:09.233Z] Copying: 337/1024 [MB] (10 MBps) [2024-11-21T00:14:10.167Z] Copying: 
348/1024 [MB] (10 MBps) [2024-11-21T00:14:11.100Z] Copying: 364/1024 [MB] (16 MBps) [2024-11-21T00:14:12.034Z] Copying: 376/1024 [MB] (11 MBps) [2024-11-21T00:14:13.418Z] Copying: 390/1024 [MB] (13 MBps) [2024-11-21T00:14:14.356Z] Copying: 400/1024 [MB] (10 MBps) [2024-11-21T00:14:15.290Z] Copying: 413/1024 [MB] (12 MBps) [2024-11-21T00:14:16.226Z] Copying: 425/1024 [MB] (11 MBps) [2024-11-21T00:14:17.164Z] Copying: 437/1024 [MB] (11 MBps) [2024-11-21T00:14:18.109Z] Copying: 454/1024 [MB] (17 MBps) [2024-11-21T00:14:19.051Z] Copying: 465/1024 [MB] (10 MBps) [2024-11-21T00:14:20.441Z] Copying: 486764/1048576 [kB] (10220 kBps) [2024-11-21T00:14:21.375Z] Copying: 485/1024 [MB] (10 MBps) [2024-11-21T00:14:22.315Z] Copying: 496/1024 [MB] (11 MBps) [2024-11-21T00:14:23.260Z] Copying: 507/1024 [MB] (11 MBps) [2024-11-21T00:14:24.201Z] Copying: 518/1024 [MB] (10 MBps) [2024-11-21T00:14:25.145Z] Copying: 530/1024 [MB] (12 MBps) [2024-11-21T00:14:26.091Z] Copying: 541/1024 [MB] (11 MBps) [2024-11-21T00:14:27.026Z] Copying: 564528/1048576 [kB] (10100 kBps) [2024-11-21T00:14:28.401Z] Copying: 565/1024 [MB] (14 MBps) [2024-11-21T00:14:29.336Z] Copying: 577/1024 [MB] (11 MBps) [2024-11-21T00:14:30.270Z] Copying: 588/1024 [MB] (11 MBps) [2024-11-21T00:14:31.295Z] Copying: 600/1024 [MB] (11 MBps) [2024-11-21T00:14:32.240Z] Copying: 611/1024 [MB] (11 MBps) [2024-11-21T00:14:33.179Z] Copying: 621/1024 [MB] (10 MBps) [2024-11-21T00:14:34.114Z] Copying: 634/1024 [MB] (13 MBps) [2024-11-21T00:14:35.053Z] Copying: 652/1024 [MB] (17 MBps) [2024-11-21T00:14:36.427Z] Copying: 663/1024 [MB] (10 MBps) [2024-11-21T00:14:37.362Z] Copying: 677/1024 [MB] (14 MBps) [2024-11-21T00:14:38.297Z] Copying: 688/1024 [MB] (11 MBps) [2024-11-21T00:14:39.232Z] Copying: 700/1024 [MB] (11 MBps) [2024-11-21T00:14:40.165Z] Copying: 715/1024 [MB] (15 MBps) [2024-11-21T00:14:41.104Z] Copying: 727/1024 [MB] (11 MBps) [2024-11-21T00:14:42.045Z] Copying: 741/1024 [MB] (14 MBps) [2024-11-21T00:14:43.419Z] Copying: 753/1024 [MB] (11 MBps) [2024-11-21T00:14:44.354Z] Copying: 763/1024 [MB] (10 MBps) [2024-11-21T00:14:45.290Z] Copying: 775/1024 [MB] (11 MBps) [2024-11-21T00:14:46.225Z] Copying: 787/1024 [MB] (11 MBps) [2024-11-21T00:14:47.160Z] Copying: 798/1024 [MB] (11 MBps) [2024-11-21T00:14:48.094Z] Copying: 810/1024 [MB] (11 MBps) [2024-11-21T00:14:49.030Z] Copying: 822/1024 [MB] (12 MBps) [2024-11-21T00:14:50.404Z] Copying: 834/1024 [MB] (11 MBps) [2024-11-21T00:14:51.339Z] Copying: 846/1024 [MB] (11 MBps) [2024-11-21T00:14:52.276Z] Copying: 857/1024 [MB] (11 MBps) [2024-11-21T00:14:53.217Z] Copying: 869/1024 [MB] (11 MBps) [2024-11-21T00:14:54.153Z] Copying: 880/1024 [MB] (10 MBps) [2024-11-21T00:14:55.089Z] Copying: 891/1024 [MB] (11 MBps) [2024-11-21T00:14:56.464Z] Copying: 903/1024 [MB] (11 MBps) [2024-11-21T00:14:57.036Z] Copying: 914/1024 [MB] (11 MBps) [2024-11-21T00:14:58.417Z] Copying: 925/1024 [MB] (10 MBps) [2024-11-21T00:14:59.352Z] Copying: 936/1024 [MB] (10 MBps) [2024-11-21T00:15:00.287Z] Copying: 947/1024 [MB] (11 MBps) [2024-11-21T00:15:01.222Z] Copying: 959/1024 [MB] (11 MBps) [2024-11-21T00:15:02.250Z] Copying: 971/1024 [MB] (11 MBps) [2024-11-21T00:15:03.184Z] Copying: 982/1024 [MB] (11 MBps) [2024-11-21T00:15:04.122Z] Copying: 994/1024 [MB] (11 MBps) [2024-11-21T00:15:05.060Z] Copying: 1004/1024 [MB] (10 MBps) [2024-11-21T00:15:06.435Z] Copying: 1015/1024 [MB] (10 MBps) [2024-11-21T00:15:06.435Z] Copying: 1048128/1048576 [kB] (8476 kBps) [2024-11-21T00:15:06.435Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-21 
00:15:06.301342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.014 [2024-11-21 00:15:06.301402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:16.014 [2024-11-21 00:15:06.301415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:16.014 [2024-11-21 00:15:06.301422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.014 [2024-11-21 00:15:06.302900] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:16.014 [2024-11-21 00:15:06.304124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.014 [2024-11-21 00:15:06.304152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:16.014 [2024-11-21 00:15:06.304161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:26:16.014 [2024-11-21 00:15:06.304171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.014 [2024-11-21 00:15:06.315003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.014 [2024-11-21 00:15:06.315030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:16.014 [2024-11-21 00:15:06.315039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.544 ms 00:26:16.014 [2024-11-21 00:15:06.315045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.014 [2024-11-21 00:15:06.332917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.014 [2024-11-21 00:15:06.332944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:16.014 [2024-11-21 00:15:06.332958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.860 ms 00:26:16.014 [2024-11-21 00:15:06.332964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.014 [2024-11-21 00:15:06.337613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.014 [2024-11-21 00:15:06.337634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:16.014 [2024-11-21 00:15:06.337642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.627 ms 00:26:16.014 [2024-11-21 00:15:06.337653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.014 [2024-11-21 00:15:06.339790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.014 [2024-11-21 00:15:06.339815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:16.014 [2024-11-21 00:15:06.339821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:26:16.014 [2024-11-21 00:15:06.339827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.015 [2024-11-21 00:15:06.343446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.015 [2024-11-21 00:15:06.343586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:16.015 [2024-11-21 00:15:06.343604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.596 ms 00:26:16.015 [2024-11-21 00:15:06.343613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.277 [2024-11-21 00:15:06.564560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.277 [2024-11-21 00:15:06.564589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:16.277 [2024-11-21 00:15:06.564598] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 220.920 ms 00:26:16.277 [2024-11-21 00:15:06.564604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.277 [2024-11-21 00:15:06.567156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.277 [2024-11-21 00:15:06.567261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:16.277 [2024-11-21 00:15:06.567272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.540 ms 00:26:16.277 [2024-11-21 00:15:06.567278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.277 [2024-11-21 00:15:06.568964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.277 [2024-11-21 00:15:06.568988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:16.277 [2024-11-21 00:15:06.568995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:26:16.277 [2024-11-21 00:15:06.569001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.277 [2024-11-21 00:15:06.570570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.277 [2024-11-21 00:15:06.570595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:16.277 [2024-11-21 00:15:06.570602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:26:16.277 [2024-11-21 00:15:06.570607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.277 [2024-11-21 00:15:06.572315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.277 [2024-11-21 00:15:06.572337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:16.277 [2024-11-21 00:15:06.572344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.668 ms 00:26:16.277 [2024-11-21 00:15:06.572350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.277 [2024-11-21 00:15:06.572372] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:16.277 [2024-11-21 00:15:06.572382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 89088 / 261120 wr_cnt: 1 state: open 00:26:16.277 [2024-11-21 00:15:06.572390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 
[2024-11-21 00:15:06.572443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 
state: free 00:26:16.277 [2024-11-21 00:15:06.572601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 
0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:16.277 [2024-11-21 00:15:06.572783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:16.278 [2024-11-21 00:15:06.572987] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:16.278 [2024-11-21 00:15:06.572996] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f1f9902c-6dc4-4138-98d5-da0a85f2c424 00:26:16.278 [2024-11-21 00:15:06.573005] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 89088 00:26:16.278 [2024-11-21 00:15:06.573011] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 90048 00:26:16.278 [2024-11-21 00:15:06.573016] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 89088 00:26:16.278 [2024-11-21 00:15:06.573025] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0108 00:26:16.278 [2024-11-21 00:15:06.573034] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:16.278 [2024-11-21 00:15:06.573040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:16.278 [2024-11-21 00:15:06.573045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:16.278 [2024-11-21 00:15:06.573050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:16.278 [2024-11-21 00:15:06.573055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:16.278 [2024-11-21 00:15:06.573060] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:16.278 [2024-11-21 00:15:06.573066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:16.278 [2024-11-21 00:15:06.573072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:26:16.278 [2024-11-21 00:15:06.573080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.574793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.278 [2024-11-21 00:15:06.574811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:16.278 [2024-11-21 00:15:06.574818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:26:16.278 [2024-11-21 00:15:06.574825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.574909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.278 [2024-11-21 00:15:06.574916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:16.278 [2024-11-21 00:15:06.574926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:16.278 [2024-11-21 00:15:06.574932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.579992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.580016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:16.278 [2024-11-21 00:15:06.580025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.278 [2024-11-21 00:15:06.580031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.580073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.580080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:16.278 [2024-11-21 00:15:06.580092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.278 [2024-11-21 00:15:06.580100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.580141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.580149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:16.278 [2024-11-21 00:15:06.580156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.278 [2024-11-21 00:15:06.580161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.580173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.580179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:16.278 [2024-11-21 00:15:06.580185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.278 [2024-11-21 00:15:06.580193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.590533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.590568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:16.278 [2024-11-21 00:15:06.590577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.278 [2024-11-21 00:15:06.590583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:16.278 [2024-11-21 00:15:06.598918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.598953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:16.278 [2024-11-21 00:15:06.598966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.278 [2024-11-21 00:15:06.598973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.599011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.599018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:16.278 [2024-11-21 00:15:06.599025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.278 [2024-11-21 00:15:06.599031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.599052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.599059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:16.278 [2024-11-21 00:15:06.599065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.278 [2024-11-21 00:15:06.599070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.599133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.599146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:16.278 [2024-11-21 00:15:06.599153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.278 [2024-11-21 00:15:06.599158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.599183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.599190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:16.278 [2024-11-21 00:15:06.599196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.278 [2024-11-21 00:15:06.599202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.599238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.599246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:16.278 [2024-11-21 00:15:06.599252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.278 [2024-11-21 00:15:06.599258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.278 [2024-11-21 00:15:06.599315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.278 [2024-11-21 00:15:06.599325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:16.279 [2024-11-21 00:15:06.599335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.279 [2024-11-21 00:15:06.599341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.279 [2024-11-21 00:15:06.599470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 298.708 ms, result 0 00:26:16.849 00:26:16.849 00:26:16.849 00:15:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:18.751 00:15:08 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:18.751 [2024-11-21 00:15:08.932042] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:18.751 [2024-11-21 00:15:08.932335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91335 ] 00:26:18.751 [2024-11-21 00:15:09.065895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.751 [2024-11-21 00:15:09.107770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.009 [2024-11-21 00:15:09.206606] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:19.009 [2024-11-21 00:15:09.206663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:19.009 [2024-11-21 00:15:09.360935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.009 [2024-11-21 00:15:09.360971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:19.010 [2024-11-21 00:15:09.360985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:19.010 [2024-11-21 00:15:09.360991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.361028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.010 [2024-11-21 00:15:09.361039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:19.010 [2024-11-21 00:15:09.361045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:19.010 [2024-11-21 00:15:09.361055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.361069] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:19.010 [2024-11-21 00:15:09.361256] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:19.010 [2024-11-21 00:15:09.361268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.010 [2024-11-21 00:15:09.361275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:19.010 [2024-11-21 00:15:09.361281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:26:19.010 [2024-11-21 00:15:09.361291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.362531] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:19.010 [2024-11-21 00:15:09.365339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.010 [2024-11-21 00:15:09.365367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:19.010 [2024-11-21 00:15:09.365375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.809 ms 00:26:19.010 [2024-11-21 00:15:09.365381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.365431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.010 [2024-11-21 00:15:09.365440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:26:19.010 [2024-11-21 00:15:09.365446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:19.010 [2024-11-21 00:15:09.365452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.371696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.010 [2024-11-21 00:15:09.371841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:19.010 [2024-11-21 00:15:09.371854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.202 ms 00:26:19.010 [2024-11-21 00:15:09.371860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.371938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.010 [2024-11-21 00:15:09.371945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:19.010 [2024-11-21 00:15:09.371952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:19.010 [2024-11-21 00:15:09.371960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.371997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.010 [2024-11-21 00:15:09.372005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:19.010 [2024-11-21 00:15:09.372012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:19.010 [2024-11-21 00:15:09.372018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.372036] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:19.010 [2024-11-21 00:15:09.373606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.010 [2024-11-21 00:15:09.373629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:19.010 [2024-11-21 00:15:09.373637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:26:19.010 [2024-11-21 00:15:09.373643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.373668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.010 [2024-11-21 00:15:09.373679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:19.010 [2024-11-21 00:15:09.373686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:19.010 [2024-11-21 00:15:09.373695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.373713] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:19.010 [2024-11-21 00:15:09.373730] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:19.010 [2024-11-21 00:15:09.373762] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:19.010 [2024-11-21 00:15:09.373774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:19.010 [2024-11-21 00:15:09.373858] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:19.010 [2024-11-21 00:15:09.373866] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:19.010 
[2024-11-21 00:15:09.373878] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:19.010 [2024-11-21 00:15:09.373885] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:19.010 [2024-11-21 00:15:09.373895] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:19.010 [2024-11-21 00:15:09.373901] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:19.010 [2024-11-21 00:15:09.373908] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:19.010 [2024-11-21 00:15:09.373915] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:19.010 [2024-11-21 00:15:09.373921] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:19.010 [2024-11-21 00:15:09.373927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.010 [2024-11-21 00:15:09.373933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:19.010 [2024-11-21 00:15:09.373939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:26:19.010 [2024-11-21 00:15:09.373946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.374010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.010 [2024-11-21 00:15:09.374016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:19.010 [2024-11-21 00:15:09.374024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:19.010 [2024-11-21 00:15:09.374029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.010 [2024-11-21 00:15:09.374111] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:19.010 [2024-11-21 00:15:09.374124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:19.010 [2024-11-21 00:15:09.374135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:19.010 [2024-11-21 00:15:09.374141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:19.010 [2024-11-21 00:15:09.374152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:19.010 [2024-11-21 00:15:09.374164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:19.010 [2024-11-21 00:15:09.374170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:19.010 [2024-11-21 00:15:09.374183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:19.010 [2024-11-21 00:15:09.374188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:19.010 [2024-11-21 00:15:09.374194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:19.010 [2024-11-21 00:15:09.374199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:19.010 [2024-11-21 00:15:09.374204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:19.010 [2024-11-21 00:15:09.374209] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:19.010 [2024-11-21 00:15:09.374220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:19.010 [2024-11-21 00:15:09.374225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:19.010 [2024-11-21 00:15:09.374235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.010 [2024-11-21 00:15:09.374245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:19.010 [2024-11-21 00:15:09.374250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.010 [2024-11-21 00:15:09.374261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:19.010 [2024-11-21 00:15:09.374273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.010 [2024-11-21 00:15:09.374285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:19.010 [2024-11-21 00:15:09.374291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.010 [2024-11-21 00:15:09.374328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:19.010 [2024-11-21 00:15:09.374334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:19.010 [2024-11-21 00:15:09.374346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:19.010 [2024-11-21 00:15:09.374352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:19.010 [2024-11-21 00:15:09.374357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:19.010 [2024-11-21 00:15:09.374366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:19.010 [2024-11-21 00:15:09.374373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:19.010 [2024-11-21 00:15:09.374379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:19.010 [2024-11-21 00:15:09.374392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:19.010 [2024-11-21 00:15:09.374401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.010 [2024-11-21 00:15:09.374407] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:19.010 [2024-11-21 00:15:09.374414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:19.010 [2024-11-21 00:15:09.374421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:19.010 [2024-11-21 00:15:09.374430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.010 [2024-11-21 
00:15:09.374438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:19.011 [2024-11-21 00:15:09.374444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:19.011 [2024-11-21 00:15:09.374450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:19.011 [2024-11-21 00:15:09.374456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:19.011 [2024-11-21 00:15:09.374461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:19.011 [2024-11-21 00:15:09.374467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:19.011 [2024-11-21 00:15:09.374474] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:19.011 [2024-11-21 00:15:09.374483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:19.011 [2024-11-21 00:15:09.374490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:19.011 [2024-11-21 00:15:09.374496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:19.011 [2024-11-21 00:15:09.374503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:19.011 [2024-11-21 00:15:09.374512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:19.011 [2024-11-21 00:15:09.374518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:19.011 [2024-11-21 00:15:09.374524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:19.011 [2024-11-21 00:15:09.374531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:19.011 [2024-11-21 00:15:09.374537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:19.011 [2024-11-21 00:15:09.374545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:19.011 [2024-11-21 00:15:09.374556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:19.011 [2024-11-21 00:15:09.374562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:19.011 [2024-11-21 00:15:09.374569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:19.011 [2024-11-21 00:15:09.374575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:19.011 [2024-11-21 00:15:09.374582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:19.011 [2024-11-21 00:15:09.374589] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:26:19.011 [2024-11-21 00:15:09.374597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:19.011 [2024-11-21 00:15:09.374604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:19.011 [2024-11-21 00:15:09.374610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:19.011 [2024-11-21 00:15:09.374617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:19.011 [2024-11-21 00:15:09.374626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:19.011 [2024-11-21 00:15:09.374633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.011 [2024-11-21 00:15:09.374642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:19.011 [2024-11-21 00:15:09.374648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:26:19.011 [2024-11-21 00:15:09.374653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.011 [2024-11-21 00:15:09.394104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.011 [2024-11-21 00:15:09.394223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:19.011 [2024-11-21 00:15:09.394269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.412 ms 00:26:19.011 [2024-11-21 00:15:09.394303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.011 [2024-11-21 00:15:09.394383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.011 [2024-11-21 00:15:09.394401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:19.011 [2024-11-21 00:15:09.394421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:19.011 [2024-11-21 00:15:09.394435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.011 [2024-11-21 00:15:09.405067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.011 [2024-11-21 00:15:09.405190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:19.011 [2024-11-21 00:15:09.405247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.537 ms 00:26:19.011 [2024-11-21 00:15:09.405274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.011 [2024-11-21 00:15:09.405339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.011 [2024-11-21 00:15:09.405368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:19.011 [2024-11-21 00:15:09.405400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:19.011 [2024-11-21 00:15:09.405425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.011 [2024-11-21 00:15:09.405886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.011 [2024-11-21 00:15:09.405994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:19.011 [2024-11-21 00:15:09.406055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:26:19.011 [2024-11-21 00:15:09.406081] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.011 [2024-11-21 00:15:09.406253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.011 [2024-11-21 00:15:09.406281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:19.011 [2024-11-21 00:15:09.406325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:26:19.011 [2024-11-21 00:15:09.406350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.011 [2024-11-21 00:15:09.412096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.011 [2024-11-21 00:15:09.412184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:19.011 [2024-11-21 00:15:09.412229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.709 ms 00:26:19.011 [2024-11-21 00:15:09.412251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.011 [2024-11-21 00:15:09.414968] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:19.011 [2024-11-21 00:15:09.415070] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:19.011 [2024-11-21 00:15:09.415117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.011 [2024-11-21 00:15:09.415134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:19.011 [2024-11-21 00:15:09.415149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.769 ms 00:26:19.011 [2024-11-21 00:15:09.415164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.011 [2024-11-21 00:15:09.426841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.011 [2024-11-21 00:15:09.426931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:19.271 [2024-11-21 00:15:09.426975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.643 ms 00:26:19.271 [2024-11-21 00:15:09.426993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 [2024-11-21 00:15:09.429176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.271 [2024-11-21 00:15:09.429284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:19.271 [2024-11-21 00:15:09.429343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:26:19.271 [2024-11-21 00:15:09.429353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 [2024-11-21 00:15:09.430832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.271 [2024-11-21 00:15:09.430854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:19.271 [2024-11-21 00:15:09.430861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:26:19.271 [2024-11-21 00:15:09.430873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 [2024-11-21 00:15:09.431115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.271 [2024-11-21 00:15:09.431125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:19.271 [2024-11-21 00:15:09.431133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:26:19.271 [2024-11-21 00:15:09.431139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 
[2024-11-21 00:15:09.448618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.271 [2024-11-21 00:15:09.448730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:19.271 [2024-11-21 00:15:09.448747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.467 ms 00:26:19.271 [2024-11-21 00:15:09.448754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 [2024-11-21 00:15:09.455135] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:19.271 [2024-11-21 00:15:09.457495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.271 [2024-11-21 00:15:09.457519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:19.271 [2024-11-21 00:15:09.457534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.715 ms 00:26:19.271 [2024-11-21 00:15:09.457543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 [2024-11-21 00:15:09.457581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.271 [2024-11-21 00:15:09.457589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:19.271 [2024-11-21 00:15:09.457595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:19.271 [2024-11-21 00:15:09.457601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 [2024-11-21 00:15:09.458892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.271 [2024-11-21 00:15:09.458988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:19.271 [2024-11-21 00:15:09.459000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:26:19.271 [2024-11-21 00:15:09.459010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 [2024-11-21 00:15:09.459034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.271 [2024-11-21 00:15:09.459046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:19.271 [2024-11-21 00:15:09.459053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:19.271 [2024-11-21 00:15:09.459059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 [2024-11-21 00:15:09.459091] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:19.271 [2024-11-21 00:15:09.459100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.271 [2024-11-21 00:15:09.459106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:19.271 [2024-11-21 00:15:09.459113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:19.271 [2024-11-21 00:15:09.459118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 [2024-11-21 00:15:09.462449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.271 [2024-11-21 00:15:09.462477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:19.271 [2024-11-21 00:15:09.462486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.314 ms 00:26:19.271 [2024-11-21 00:15:09.462493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 [2024-11-21 00:15:09.462551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.271 [2024-11-21 00:15:09.462559] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:19.271 [2024-11-21 00:15:09.462565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:19.271 [2024-11-21 00:15:09.462571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.271 [2024-11-21 00:15:09.463424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.110 ms, result 0 00:26:20.216  [2024-11-21T00:15:12.026Z] Copying: 1024/1048576 [kB] (1024 kBps) [2024-11-21T00:15:12.962Z] Copying: 3752/1048576 [kB] (2728 kBps) [2024-11-21T00:15:13.918Z] Copying: 13576/1048576 [kB] (9824 kBps) [2024-11-21T00:15:14.856Z] Copying: 30/1024 [MB] (17 MBps) [2024-11-21T00:15:15.793Z] Copying: 46/1024 [MB] (16 MBps) [2024-11-21T00:15:16.737Z] Copying: 64/1024 [MB] (17 MBps) [2024-11-21T00:15:17.674Z] Copying: 81/1024 [MB] (16 MBps) [2024-11-21T00:15:18.609Z] Copying: 97/1024 [MB] (16 MBps) [2024-11-21T00:15:19.984Z] Copying: 115/1024 [MB] (17 MBps) [2024-11-21T00:15:20.919Z] Copying: 133/1024 [MB] (17 MBps) [2024-11-21T00:15:21.855Z] Copying: 150/1024 [MB] (17 MBps) [2024-11-21T00:15:22.791Z] Copying: 169/1024 [MB] (18 MBps) [2024-11-21T00:15:23.733Z] Copying: 187/1024 [MB] (18 MBps) [2024-11-21T00:15:24.667Z] Copying: 203/1024 [MB] (16 MBps) [2024-11-21T00:15:26.042Z] Copying: 221/1024 [MB] (18 MBps) [2024-11-21T00:15:26.976Z] Copying: 239/1024 [MB] (17 MBps) [2024-11-21T00:15:27.911Z] Copying: 257/1024 [MB] (18 MBps) [2024-11-21T00:15:28.846Z] Copying: 275/1024 [MB] (18 MBps) [2024-11-21T00:15:29.784Z] Copying: 293/1024 [MB] (17 MBps) [2024-11-21T00:15:30.717Z] Copying: 311/1024 [MB] (17 MBps) [2024-11-21T00:15:31.651Z] Copying: 327/1024 [MB] (16 MBps) [2024-11-21T00:15:33.069Z] Copying: 345/1024 [MB] (17 MBps) [2024-11-21T00:15:33.664Z] Copying: 364/1024 [MB] (18 MBps) [2024-11-21T00:15:35.040Z] Copying: 382/1024 [MB] (18 MBps) [2024-11-21T00:15:35.609Z] Copying: 400/1024 [MB] (17 MBps) [2024-11-21T00:15:36.984Z] Copying: 416/1024 [MB] (16 MBps) [2024-11-21T00:15:37.918Z] Copying: 434/1024 [MB] (17 MBps) [2024-11-21T00:15:38.857Z] Copying: 454/1024 [MB] (19 MBps) [2024-11-21T00:15:39.794Z] Copying: 472/1024 [MB] (17 MBps) [2024-11-21T00:15:40.729Z] Copying: 488/1024 [MB] (16 MBps) [2024-11-21T00:15:41.664Z] Copying: 505/1024 [MB] (17 MBps) [2024-11-21T00:15:43.039Z] Copying: 523/1024 [MB] (17 MBps) [2024-11-21T00:15:43.973Z] Copying: 542/1024 [MB] (18 MBps) [2024-11-21T00:15:44.906Z] Copying: 571/1024 [MB] (29 MBps) [2024-11-21T00:15:45.865Z] Copying: 594/1024 [MB] (22 MBps) [2024-11-21T00:15:46.798Z] Copying: 620/1024 [MB] (26 MBps) [2024-11-21T00:15:47.732Z] Copying: 640/1024 [MB] (19 MBps) [2024-11-21T00:15:48.666Z] Copying: 659/1024 [MB] (19 MBps) [2024-11-21T00:15:50.041Z] Copying: 677/1024 [MB] (18 MBps) [2024-11-21T00:15:50.612Z] Copying: 696/1024 [MB] (19 MBps) [2024-11-21T00:15:51.997Z] Copying: 714/1024 [MB] (17 MBps) [2024-11-21T00:15:52.936Z] Copying: 730/1024 [MB] (15 MBps) [2024-11-21T00:15:53.871Z] Copying: 752/1024 [MB] (22 MBps) [2024-11-21T00:15:54.807Z] Copying: 773/1024 [MB] (21 MBps) [2024-11-21T00:15:55.741Z] Copying: 793/1024 [MB] (19 MBps) [2024-11-21T00:15:56.676Z] Copying: 812/1024 [MB] (19 MBps) [2024-11-21T00:15:57.611Z] Copying: 836/1024 [MB] (23 MBps) [2024-11-21T00:15:58.991Z] Copying: 862/1024 [MB] (26 MBps) [2024-11-21T00:15:59.929Z] Copying: 881/1024 [MB] (18 MBps) [2024-11-21T00:16:00.866Z] Copying: 900/1024 [MB] (19 MBps) [2024-11-21T00:16:01.807Z] Copying: 
917/1024 [MB] (17 MBps) [2024-11-21T00:16:02.747Z] Copying: 935/1024 [MB] (17 MBps) [2024-11-21T00:16:03.682Z] Copying: 951/1024 [MB] (15 MBps) [2024-11-21T00:16:04.618Z] Copying: 969/1024 [MB] (18 MBps) [2024-11-21T00:16:05.614Z] Copying: 996/1024 [MB] (26 MBps) [2024-11-21T00:16:05.872Z] Copying: 1016/1024 [MB] (20 MBps) [2024-11-21T00:16:06.441Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-21 00:16:06.252985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.020 [2024-11-21 00:16:06.253059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:16.020 [2024-11-21 00:16:06.253073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:16.020 [2024-11-21 00:16:06.253080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.020 [2024-11-21 00:16:06.253101] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:16.020 [2024-11-21 00:16:06.253744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.020 [2024-11-21 00:16:06.253766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:16.020 [2024-11-21 00:16:06.253780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:27:16.020 [2024-11-21 00:16:06.253787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.020 [2024-11-21 00:16:06.253982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.020 [2024-11-21 00:16:06.253991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:16.020 [2024-11-21 00:16:06.253999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:27:16.020 [2024-11-21 00:16:06.254007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.020 [2024-11-21 00:16:06.264730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.020 [2024-11-21 00:16:06.264844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:16.020 [2024-11-21 00:16:06.264895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.708 ms 00:27:16.020 [2024-11-21 00:16:06.264915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.020 [2024-11-21 00:16:06.269818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.020 [2024-11-21 00:16:06.269919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:16.020 [2024-11-21 00:16:06.269965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.851 ms 00:27:16.020 [2024-11-21 00:16:06.269983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.021 [2024-11-21 00:16:06.271529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.021 [2024-11-21 00:16:06.271629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:16.021 [2024-11-21 00:16:06.271676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:27:16.021 [2024-11-21 00:16:06.271693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.021 [2024-11-21 00:16:06.275418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.021 [2024-11-21 00:16:06.275522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:16.021 [2024-11-21 00:16:06.275566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.692 ms 00:27:16.021 [2024-11-21 00:16:06.275588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.021 [2024-11-21 00:16:06.277922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.021 [2024-11-21 00:16:06.278059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:16.021 [2024-11-21 00:16:06.278401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.298 ms 00:27:16.021 [2024-11-21 00:16:06.278439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.021 [2024-11-21 00:16:06.280500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.021 [2024-11-21 00:16:06.280597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:16.021 [2024-11-21 00:16:06.280662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:27:16.021 [2024-11-21 00:16:06.280743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.021 [2024-11-21 00:16:06.282229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.021 [2024-11-21 00:16:06.282326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:16.021 [2024-11-21 00:16:06.282338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:27:16.021 [2024-11-21 00:16:06.282344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.021 [2024-11-21 00:16:06.283590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.021 [2024-11-21 00:16:06.283613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:16.021 [2024-11-21 00:16:06.283620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:27:16.021 [2024-11-21 00:16:06.283625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.021 [2024-11-21 00:16:06.284833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.021 [2024-11-21 00:16:06.284923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:16.021 [2024-11-21 00:16:06.284935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:27:16.021 [2024-11-21 00:16:06.284941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.021 [2024-11-21 00:16:06.284963] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:16.021 [2024-11-21 00:16:06.284975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:16.021 [2024-11-21 00:16:06.284984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:16.021 [2024-11-21 00:16:06.284991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.284997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 
00:16:06.285022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 
00:27:16.021 [2024-11-21 00:16:06.285172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:16.021 [2024-11-21 00:16:06.285335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 
wr_cnt: 0 state: free
[2024-11-21 00:16:06.285342..285605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 58-100: 0 / 261120 wr_cnt: 0 state: free
[2024-11-21 00:16:06.285617] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-21 00:16:06.285629] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f1f9902c-6dc4-4138-98d5-da0a85f2c424
[2024-11-21 00:16:06.285636] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
[2024-11-21 00:16:06.285647] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 175552
[2024-11-21 00:16:06.285653] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 173568
[2024-11-21 00:16:06.285660] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0114
[2024-11-21 00:16:06.285666] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-21 00:16:06.285672..285695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0, high: 0, low: 0, start: 0
[2024-11-21 00:16:06.285695] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 0.733 ms, status 0
[2024-11-21 00:16:06.287535] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 1.805 ms, status 0
[2024-11-21 00:16:06.287661] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.077 ms, status 0
[2024-11-21 00:16:06.293183] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
[2024-11-21 00:16:06.293393] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
[2024-11-21 00:16:06.293490] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
[2024-11-21 00:16:06.293602] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
[2024-11-21 00:16:06.304523] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
[2024-11-21 00:16:06.313150] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
[2024-11-21 00:16:06.313381] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
[2024-11-21 00:16:06.313494] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
[2024-11-21 00:16:06.313614] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
[2024-11-21 00:16:06.313801] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
[2024-11-21 00:16:06.313926] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
[2024-11-21 00:16:06.314116] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
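
As a cross-check on the statistics dump above: the WAF figure is simply total (media) writes divided by user writes. A minimal standalone sketch with the two counters hard-coded from the dump; this is illustrative arithmetic, not SPDK's ftl_debug.c code:

    /* waf_check.c: recompute the WAF printed by ftl_dev_dump_stats above. */
    #include <stdio.h>

    int main(void)
    {
        double total_writes = 175552.0; /* media writes issued by the FTL */
        double user_writes  = 173568.0; /* writes submitted by the user   */

        /* Write amplification: media writes per user write. */
        printf("WAF: %.4f\n", total_writes / user_writes); /* -> WAF: 1.0114 */
        return 0;
    }
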
[2024-11-21 00:16:06.314305] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.281 ms, result 0
00:16:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:16:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-21 00:16:08.653479] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
[2024-11-21 00:16:08.653675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91948 ]
[2024-11-21 00:16:08.785718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-21 00:16:08.839942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-21 00:16:08.989319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-21 00:16:08.989684] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-21 00:16:09.154997] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.006 ms, status 0
[2024-11-21 00:16:09.155508] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 0.054 ms, status 0
[2024-11-21 00:16:09.155839] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-21 00:16:09.156233] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-21 00:16:09.156386] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.549 ms, status 0
[2024-11-21 00:16:09.159040] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
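
The spdk_dd invocation above reads the second half of ftl0 into testfile2: assuming the FTL bdev's 4 KiB logical block size (an assumption, but consistent with the 1024 MB total reported by the copy progress later in this log), --count=262144 and --skip=262144 each correspond to exactly 1 GiB. A quick standalone check of that arithmetic:

    /* dd_span_check.c: size of the spdk_dd copy window above. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long long block = 4096;   /* assumed FTL block size */
        const unsigned long long count = 262144; /* --count (blocks)       */
        const unsigned long long skip  = 262144; /* --skip  (blocks)       */

        printf("copy %llu MiB starting at %llu MiB\n",
               count * block >> 20, skip * block >> 20);
        /* -> copy 1024 MiB starting at 1024 MiB: the device's second half */
        return 0;
    }
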
[2024-11-21 00:16:09.163894] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 4.855 ms, status 0
[2024-11-21 00:16:09.164070] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.033 ms, status 0
[2024-11-21 00:16:09.175954] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 11.769 ms, status 0
[2024-11-21 00:16:09.176148] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.085 ms, status 0
[2024-11-21 00:16:09.176255] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.010 ms, status 0
[2024-11-21 00:16:09.176338] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-21 00:16:09.179125] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 2.795 ms, status 0
[2024-11-21 00:16:09.179226] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.016 ms, status 0
[2024-11-21 00:16:09.179294] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-21 00:16:09.179338] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-21 00:16:09.179387] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-21 00:16:09.179405] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-21 00:16:09.179521] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-21 00:16:09.179535] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-21 00:16:09.179547] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-21 00:16:09.179560] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-21 00:16:09.179574] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-21 00:16:09.179583] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-21 00:16:09.179598] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-21 00:16:09.179606] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-21 00:16:09.179619] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-21 00:16:09.179630] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.339 ms, status 0
[2024-11-21 00:16:09.179744] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.069 ms, status 0
[2024-11-21 00:16:09.179876] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region           offset (MiB)  blocks (MiB)
    sb                       0.00          0.12
    l2p                      0.12         80.00
    band_md                 80.12          0.50
    band_md_mirror          80.62          0.50
    nvc_md                 113.88          0.12
    nvc_md_mirror          114.00          0.12
    p2l0                    81.12          8.00
    p2l1                    89.12          8.00
    p2l2                    97.12          8.00
    p2l3                   105.12          8.00
    trim_md                113.12          0.25
    trim_md_mirror         113.38          0.25
    trim_log               113.62          0.12
    trim_log_mirror        113.75          0.12
[2024-11-21 00:16:09.180256] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region           offset (MiB)  blocks (MiB)
    sb_mirror                0.00          0.12
    vmap                102400.25          3.38
    data_btm                 0.25     102400.00
[2024-11-21 00:16:09.180362] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type  ver  blk_offs   blk_sz
    0x0            5  0x0        0x20
    0x2            0  0x20       0x5000
    0x3            2  0x5020     0x80
    0x4            2  0x50a0     0x80
    0xa            2  0x5120     0x800
    0xb            2  0x5920     0x800
    0xc            2  0x6120     0x800
    0xd            2  0x6920     0x800
    0xe            0  0x7120     0x40
    0xf            0  0x7160     0x40
    0x10           1  0x71a0     0x20
    0x11           1  0x71c0     0x20
    0x6            2  0x71e0     0x20
    0x7            2  0x7200     0x20
    0xfffffffe     0  0x7220     0x13c0e0
[2024-11-21 00:16:09.180519] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type  ver  blk_offs   blk_sz
    0x1            5  0x0        0x20
    0xfffffffe     0  0x20       0x20
    0x9            0  0x40       0x1900000
    0x5            0  0x1900040  0x360
    0xfffffffe     0  0x19003a0  0x3fc60
[2024-11-21 00:16:09.180576] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.769 ms, status 0
[2024-11-21 00:16:09.213648] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 32.961 ms, status 0
[2024-11-21 00:16:09.214602] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.176 ms, status 0
[2024-11-21 00:16:09.231508] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 16.425 ms, status 0
[2024-11-21 00:16:09.231838] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.004 ms, status 0
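
The layout figures above are internally consistent: 20971520 L2P entries at the reported address size of 4 bytes is exactly the 80.00 MiB shown for the l2p region, and with the assumed 4 KiB block size those entries cover 80 GiB of logical space. A standalone sketch of the arithmetic (not SPDK's ftl_layout.c code):

    /* l2p_size_check.c: L2P sizing from the layout dump above. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long long entries   = 20971520; /* L2P entries      */
        const unsigned long long addr_size = 4;        /* L2P address size */
        const unsigned long long block     = 4096;     /* assumed blk size */

        /* Table footprint: matches the l2p region's 80.00 MiB. */
        printf("l2p table: %llu MiB\n", entries * addr_size >> 20);
        /* Logical space those entries can map. */
        printf("mapped space: %llu GiB\n", entries * block >> 30);
        return 0;
    }
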
[2024-11-21 00:16:09.232741] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.742 ms, status 0
[2024-11-21 00:16:09.233752] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.158 ms, status 0
[2024-11-21 00:16:09.243898] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 9.894 ms, status 0
[2024-11-21 00:16:09.248750] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-11-21 00:16:09.248932] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-21 00:16:09.248998] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 4.582 ms, status 0
[2024-11-21 00:16:09.266004] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 16.552 ms, status 0
[2024-11-21 00:16:09.269263] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 2.835 ms, status 0
[2024-11-21 00:16:09.272380] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 2.786 ms, status 0
[2024-11-21 00:16:09.273027] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.282 ms, status 0
[2024-11-21 00:16:09.305190] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 31.900 ms, status 0
[2024-11-21 00:16:09.313672] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-21 00:16:09.317110] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 11.609 ms, status 0
[2024-11-21 00:16:09.317269] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.017 ms, status 0
[2024-11-21 00:16:09.318379] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 1.009 ms, status 0
[2024-11-21 00:16:09.318500] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.014 ms, status 0
[2024-11-21 00:16:09.318572] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-21 00:16:09.318590] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.019 ms, status 0
[2024-11-21 00:16:09.325030] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 6.386 ms, status 0
[2024-11-21 00:16:09.325202] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.050 ms, status 0
[2024-11-21 00:16:09.326970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 171.411 ms, result 0
[2024-11-21T00:16:11.761Z] Copying: 13/1024 [MB] (13 MBps)
[2024-11-21T00:16:12Z..00:17:36Z] Copying: 13..1018/1024 [MB], one progress entry per second at 10-20 MBps
[2024-11-21T00:17:36.490Z] Copying: 1024/1024 [MB] (average 11 MBps)
[2024-11-21 00:17:36.325986] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.004 ms, status 0
[2024-11-21 00:17:36.326100] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
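
From the first and last progress stamps above, the copy moved roughly 1011 MB in about 85 s, close to 12 MBps, which squares with the tool's reported 'average 11 MBps' over the whole run. The arithmetic, as a standalone sketch:

    /* copy_rate_check.c: average throughput between the two stamps above. */
    #include <stdio.h>

    int main(void)
    {
        double t_first = 11.761;        /* 00:16:11.761Z, 13/1024 MB done */
        double t_last  = 96.490;        /* 00:17:36.490Z, 1024/1024 MB    */
        double mb      = 1024.0 - 13.0; /* MB moved between the stamps    */

        printf("avg: %.1f MBps\n", mb / (t_last - t_first)); /* ~11.9 MBps */
        return 0;
    }
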
[2024-11-21 00:17:36.326744] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 0.632 ms, status 0
[2024-11-21 00:17:36.326965] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 0.166 ms, status 0
[2024-11-21 00:17:36.329673] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 2.621 ms, status 0
[2024-11-21 00:17:36.335371] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 5.645 ms, status 0
[2024-11-21 00:17:36.337740] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 2.277 ms, status 0
[2024-11-21 00:17:36.341859] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 3.909 ms, status 0
[2024-11-21 00:17:36.345873] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 3.928 ms, status 0
[2024-11-21 00:17:36.348414] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration 2.482 ms, status 0
[2024-11-21 00:17:36.350247] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration 1.689 ms, status 0
[2024-11-21 00:17:36.351797] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 1.478 ms, status 0
[2024-11-21 00:17:36.353243] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 1.249 ms, status 0
[2024-11-21 00:17:36.353318] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-21 00:17:36.353339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
[2024-11-21 00:17:36.353348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
[2024-11-21 00:17:36.353355..353944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free
[2024-11-21 00:17:36.353957] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-21 00:17:36.353964] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f1f9902c-6dc4-4138-98d5-da0a85f2c424
[2024-11-21 00:17:36.353970] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
[2024-11-21 00:17:36.353979] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-21 00:17:36.353985] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-21 00:17:36.353992] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-21 00:17:36.353999] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-21 00:17:36.354005..354020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0, high: 0, low: 0, start: 0
[2024-11-21 00:17:36.354026] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 0.708 ms, status 0
[2024-11-21 00:17:36.355787] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 1.723 ms, status 0
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.071 [2024-11-21 00:17:36.355919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:46.071 [2024-11-21 00:17:36.355927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:46.071 [2024-11-21 00:17:36.355933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.361126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.361153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:46.071 [2024-11-21 00:17:36.361161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.361167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.361214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.361221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:46.071 [2024-11-21 00:17:36.361231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.361237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.361270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.361277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:46.071 [2024-11-21 00:17:36.361284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.361291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.361316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.361326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:46.071 [2024-11-21 00:17:36.361335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.361344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.372063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.372104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:46.071 [2024-11-21 00:17:36.372112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.372119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.380621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.380662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:46.071 [2024-11-21 00:17:36.380671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.380677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.380721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.380729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:46.071 [2024-11-21 00:17:36.380736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.380742] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.380762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.380774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:46.071 [2024-11-21 00:17:36.380783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.380789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.380845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.380854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:46.071 [2024-11-21 00:17:36.380860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.380866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.380890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.380898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:46.071 [2024-11-21 00:17:36.380904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.380912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.380949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.380957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:46.071 [2024-11-21 00:17:36.380963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.380970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.381010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.071 [2024-11-21 00:17:36.381018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:46.071 [2024-11-21 00:17:36.381027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.071 [2024-11-21 00:17:36.381034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.071 [2024-11-21 00:17:36.381145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.133 ms, result 0 00:28:46.332 00:28:46.332 00:28:46.332 00:17:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:48.968 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:48.968 Process with pid 89807 is not found 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89807 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89807 ']' 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89807 00:28:48.968 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89807) - No such process 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89807 is not found' 00:28:48.968 00:17:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:48.968 Remove shared memory files 00:28:48.968 00:17:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:48.968 00:17:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:48.968 00:17:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:48.968 00:17:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:48.968 00:17:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:48.968 00:17:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:48.968 00:17:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:48.968 ************************************ 00:28:48.968 END TEST ftl_dirty_shutdown 00:28:48.968 ************************************ 00:28:48.968 00:28:48.968 real 4m52.821s 00:28:48.968 user 5m6.570s 00:28:48.968 sys 0m22.952s 00:28:48.968 00:17:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:48.968 00:17:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:49.231 00:17:39 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:49.231 00:17:39 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:49.231 00:17:39 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:49.231 00:17:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:49.231 ************************************ 00:28:49.231 START TEST ftl_upgrade_shutdown 00:28:49.231 ************************************ 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:49.231 * Looking for test storage... 
00:28:49.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:49.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.231 --rc genhtml_branch_coverage=1 00:28:49.231 --rc genhtml_function_coverage=1 00:28:49.231 --rc genhtml_legend=1 00:28:49.231 --rc geninfo_all_blocks=1 00:28:49.231 --rc geninfo_unexecuted_blocks=1 00:28:49.231 00:28:49.231 ' 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:49.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.231 --rc genhtml_branch_coverage=1 00:28:49.231 --rc genhtml_function_coverage=1 00:28:49.231 --rc genhtml_legend=1 00:28:49.231 --rc geninfo_all_blocks=1 00:28:49.231 --rc geninfo_unexecuted_blocks=1 00:28:49.231 00:28:49.231 ' 00:28:49.231 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:49.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.232 --rc genhtml_branch_coverage=1 00:28:49.232 --rc genhtml_function_coverage=1 00:28:49.232 --rc genhtml_legend=1 00:28:49.232 --rc geninfo_all_blocks=1 00:28:49.232 --rc geninfo_unexecuted_blocks=1 00:28:49.232 00:28:49.232 ' 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:49.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.232 --rc genhtml_branch_coverage=1 00:28:49.232 --rc genhtml_function_coverage=1 00:28:49.232 --rc genhtml_legend=1 00:28:49.232 --rc geninfo_all_blocks=1 00:28:49.232 --rc geninfo_unexecuted_blocks=1 00:28:49.232 00:28:49.232 ' 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:49.232 00:17:39 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92941 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92941 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92941 ']' 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:49.232 00:17:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:49.491 [2024-11-21 00:17:39.665694] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
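The trace above starts spdk_tgt pinned to core 0 (--cpumask='[0]'), records its pid (92941 in this run), and blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait pattern, assuming $rootdir points at an SPDK checkout; the rpc_get_methods probe is an assumption about a suitable readiness check, and the real waitforlisten in autotest_common.sh adds retry limits and richer error handling:

    # Launch the SPDK target on core 0 and wait for its RPC socket to come up.
    spdk_tgt_bin="$rootdir/build/bin/spdk_tgt"
    "$spdk_tgt_bin" --cpumask='[0]' &
    spdk_tgt_pid=$!
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        # Bail out if the target died before it ever started listening.
        kill -0 "$spdk_tgt_pid" 2> /dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "spdk_tgt ($spdk_tgt_pid) is listening on /var/tmp/spdk.sock"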
00:28:49.491 [2024-11-21 00:17:39.665955] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92941 ] 00:28:49.491 [2024-11-21 00:17:39.798961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.491 [2024-11-21 00:17:39.840182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:28:50.426 00:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:50.685 00:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:50.685 { 00:28:50.685 "name": "basen1", 00:28:50.685 "aliases": [ 00:28:50.685 "afc607ed-1d0d-4358-a020-df95526fc7e9" 00:28:50.685 ], 00:28:50.685 "product_name": "NVMe disk", 00:28:50.685 "block_size": 4096, 00:28:50.685 "num_blocks": 1310720, 00:28:50.685 "uuid": "afc607ed-1d0d-4358-a020-df95526fc7e9", 00:28:50.685 "numa_id": -1, 00:28:50.685 "assigned_rate_limits": { 00:28:50.685 "rw_ios_per_sec": 0, 00:28:50.685 "rw_mbytes_per_sec": 0, 00:28:50.685 "r_mbytes_per_sec": 0, 00:28:50.685 "w_mbytes_per_sec": 0 00:28:50.685 }, 00:28:50.685 "claimed": true, 00:28:50.685 "claim_type": "read_many_write_one", 00:28:50.685 "zoned": false, 00:28:50.685 "supported_io_types": { 00:28:50.685 "read": true, 00:28:50.685 "write": true, 00:28:50.685 "unmap": true, 00:28:50.685 "flush": true, 00:28:50.685 "reset": true, 00:28:50.685 "nvme_admin": true, 00:28:50.685 "nvme_io": true, 00:28:50.685 "nvme_io_md": false, 00:28:50.685 "write_zeroes": true, 00:28:50.685 "zcopy": false, 00:28:50.685 "get_zone_info": false, 00:28:50.685 "zone_management": false, 00:28:50.685 "zone_append": false, 00:28:50.685 "compare": true, 00:28:50.685 "compare_and_write": false, 00:28:50.685 "abort": true, 00:28:50.685 "seek_hole": false, 00:28:50.685 "seek_data": false, 00:28:50.685 "copy": true, 00:28:50.685 "nvme_iov_md": false 00:28:50.685 }, 00:28:50.685 "driver_specific": { 00:28:50.685 "nvme": [ 00:28:50.685 { 00:28:50.685 "pci_address": "0000:00:11.0", 00:28:50.685 "trid": { 00:28:50.685 "trtype": "PCIe", 00:28:50.685 "traddr": "0000:00:11.0" 00:28:50.685 }, 00:28:50.685 "ctrlr_data": { 00:28:50.685 "cntlid": 0, 00:28:50.685 "vendor_id": "0x1b36", 00:28:50.685 "model_number": "QEMU NVMe Ctrl", 00:28:50.685 "serial_number": "12341", 00:28:50.685 "firmware_revision": "8.0.0", 00:28:50.685 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:50.685 "oacs": { 00:28:50.685 "security": 0, 00:28:50.685 "format": 1, 00:28:50.685 "firmware": 0, 00:28:50.685 "ns_manage": 1 00:28:50.685 }, 00:28:50.685 "multi_ctrlr": false, 00:28:50.685 "ana_reporting": false 00:28:50.685 }, 00:28:50.685 "vs": { 00:28:50.685 "nvme_version": "1.4" 00:28:50.685 }, 00:28:50.685 "ns_data": { 00:28:50.685 "id": 1, 00:28:50.685 "can_share": false 00:28:50.685 } 00:28:50.685 } 00:28:50.685 ], 00:28:50.685 "mp_policy": "active_passive" 00:28:50.685 } 00:28:50.685 } 00:28:50.685 ]' 00:28:50.685 00:17:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:50.685 00:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:50.685 00:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:50.685 00:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:50.685 00:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:50.685 00:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:28:50.685 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:50.685 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:50.685 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:50.685 00:17:41 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:50.685 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:50.944 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=10312962-73b9-47dd-ace5-d92fdc507261 00:28:50.944 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:50.944 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 10312962-73b9-47dd-ace5-d92fdc507261 00:28:51.202 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=f6e33cdf-18ad-44d4-94b5-72a061d09abd 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u f6e33cdf-18ad-44d4-94b5-72a061d09abd 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=6e5b1a64-4511-4cf6-b441-9b46284a43d9 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 6e5b1a64-4511-4cf6-b441-9b46284a43d9 ]] 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 6e5b1a64-4511-4cf6-b441-9b46284a43d9 5120 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=6e5b1a64-4511-4cf6-b441-9b46284a43d9 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6e5b1a64-4511-4cf6-b441-9b46284a43d9 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=6e5b1a64-4511-4cf6-b441-9b46284a43d9 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:28:51.461 00:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e5b1a64-4511-4cf6-b441-9b46284a43d9 00:28:51.719 00:17:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:51.719 { 00:28:51.719 "name": "6e5b1a64-4511-4cf6-b441-9b46284a43d9", 00:28:51.719 "aliases": [ 00:28:51.719 "lvs/basen1p0" 00:28:51.719 ], 00:28:51.719 "product_name": "Logical Volume", 00:28:51.719 "block_size": 4096, 00:28:51.719 "num_blocks": 5242880, 00:28:51.719 "uuid": "6e5b1a64-4511-4cf6-b441-9b46284a43d9", 00:28:51.719 "assigned_rate_limits": { 00:28:51.719 "rw_ios_per_sec": 0, 00:28:51.719 "rw_mbytes_per_sec": 0, 00:28:51.719 "r_mbytes_per_sec": 0, 00:28:51.719 "w_mbytes_per_sec": 0 00:28:51.719 }, 00:28:51.719 "claimed": false, 00:28:51.719 "zoned": false, 00:28:51.719 "supported_io_types": { 00:28:51.719 "read": true, 00:28:51.719 "write": true, 00:28:51.719 "unmap": true, 00:28:51.719 "flush": false, 00:28:51.719 "reset": true, 00:28:51.719 "nvme_admin": false, 00:28:51.719 "nvme_io": false, 00:28:51.719 "nvme_io_md": false, 00:28:51.719 "write_zeroes": 
true, 00:28:51.719 "zcopy": false, 00:28:51.719 "get_zone_info": false, 00:28:51.719 "zone_management": false, 00:28:51.719 "zone_append": false, 00:28:51.719 "compare": false, 00:28:51.719 "compare_and_write": false, 00:28:51.719 "abort": false, 00:28:51.719 "seek_hole": true, 00:28:51.719 "seek_data": true, 00:28:51.719 "copy": false, 00:28:51.719 "nvme_iov_md": false 00:28:51.719 }, 00:28:51.719 "driver_specific": { 00:28:51.719 "lvol": { 00:28:51.719 "lvol_store_uuid": "f6e33cdf-18ad-44d4-94b5-72a061d09abd", 00:28:51.719 "base_bdev": "basen1", 00:28:51.719 "thin_provision": true, 00:28:51.719 "num_allocated_clusters": 0, 00:28:51.719 "snapshot": false, 00:28:51.719 "clone": false, 00:28:51.719 "esnap_clone": false 00:28:51.719 } 00:28:51.719 } 00:28:51.719 } 00:28:51.719 ]' 00:28:51.719 00:17:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:51.719 00:17:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:51.719 00:17:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:51.719 00:17:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:28:51.719 00:17:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:28:51.719 00:17:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:28:51.719 00:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:51.720 00:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:51.720 00:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:51.978 00:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:51.978 00:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:51.978 00:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:52.237 00:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:52.237 00:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:52.237 00:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 6e5b1a64-4511-4cf6-b441-9b46284a43d9 -c cachen1p0 --l2p_dram_limit 2 00:28:52.497 [2024-11-21 00:17:42.768075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.497 [2024-11-21 00:17:42.768123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:52.497 [2024-11-21 00:17:42.768138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:52.497 [2024-11-21 00:17:42.768149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.497 [2024-11-21 00:17:42.768186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.497 [2024-11-21 00:17:42.768195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:52.497 [2024-11-21 00:17:42.768202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:52.497 [2024-11-21 00:17:42.768212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.497 [2024-11-21 00:17:42.768230] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:52.497 [2024-11-21 
00:17:42.768431] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:52.497 [2024-11-21 00:17:42.768444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.497 [2024-11-21 00:17:42.768452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:52.497 [2024-11-21 00:17:42.768461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.218 ms 00:28:52.497 [2024-11-21 00:17:42.768470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.497 [2024-11-21 00:17:42.768494] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 69dda36b-7eda-4c64-a11f-835234a8bb0d 00:28:52.497 [2024-11-21 00:17:42.769759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.497 [2024-11-21 00:17:42.769782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:52.497 [2024-11-21 00:17:42.769792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:52.497 [2024-11-21 00:17:42.769798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.497 [2024-11-21 00:17:42.776715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.497 [2024-11-21 00:17:42.776741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:52.497 [2024-11-21 00:17:42.776751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.870 ms 00:28:52.497 [2024-11-21 00:17:42.776758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.497 [2024-11-21 00:17:42.776826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.497 [2024-11-21 00:17:42.776835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:52.497 [2024-11-21 00:17:42.776843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:52.497 [2024-11-21 00:17:42.776857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.497 [2024-11-21 00:17:42.776892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.497 [2024-11-21 00:17:42.776902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:52.497 [2024-11-21 00:17:42.776911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:52.497 [2024-11-21 00:17:42.776917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.497 [2024-11-21 00:17:42.776935] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:52.497 [2024-11-21 00:17:42.778597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.497 [2024-11-21 00:17:42.778622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:52.497 [2024-11-21 00:17:42.778632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.667 ms 00:28:52.497 [2024-11-21 00:17:42.778639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.497 [2024-11-21 00:17:42.778660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.497 [2024-11-21 00:17:42.778668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:52.497 [2024-11-21 00:17:42.778675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:52.497 [2024-11-21 00:17:42.778684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:52.497 [2024-11-21 00:17:42.778697] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:52.497 [2024-11-21 00:17:42.778814] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:52.497 [2024-11-21 00:17:42.778824] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:52.497 [2024-11-21 00:17:42.778835] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:52.497 [2024-11-21 00:17:42.778843] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:52.497 [2024-11-21 00:17:42.778851] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:52.497 [2024-11-21 00:17:42.778857] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:52.497 [2024-11-21 00:17:42.778869] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:52.497 [2024-11-21 00:17:42.778875] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:52.497 [2024-11-21 00:17:42.778882] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:52.497 [2024-11-21 00:17:42.778889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.497 [2024-11-21 00:17:42.778897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:52.497 [2024-11-21 00:17:42.778907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:28:52.497 [2024-11-21 00:17:42.778915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.497 [2024-11-21 00:17:42.778979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.497 [2024-11-21 00:17:42.778989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:52.497 [2024-11-21 00:17:42.778997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:52.497 [2024-11-21 00:17:42.779008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.497 [2024-11-21 00:17:42.779082] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:52.497 [2024-11-21 00:17:42.779095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:52.497 [2024-11-21 00:17:42.779104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:52.497 [2024-11-21 00:17:42.779111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.497 [2024-11-21 00:17:42.779118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:52.497 [2024-11-21 00:17:42.779127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:52.497 [2024-11-21 00:17:42.779132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:52.497 [2024-11-21 00:17:42.779139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:52.497 [2024-11-21 00:17:42.779144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:52.497 [2024-11-21 00:17:42.779151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.497 [2024-11-21 00:17:42.779157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:52.497 [2024-11-21 00:17:42.779163] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:28:52.497 [2024-11-21 00:17:42.779168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.497 [2024-11-21 00:17:42.779176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:52.497 [2024-11-21 00:17:42.779181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:52.497 [2024-11-21 00:17:42.779188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.497 [2024-11-21 00:17:42.779193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:52.497 [2024-11-21 00:17:42.779201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:52.497 [2024-11-21 00:17:42.779206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.497 [2024-11-21 00:17:42.779215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:52.497 [2024-11-21 00:17:42.779220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:52.497 [2024-11-21 00:17:42.779227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:52.497 [2024-11-21 00:17:42.779232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:52.497 [2024-11-21 00:17:42.779239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:52.497 [2024-11-21 00:17:42.779245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:52.497 [2024-11-21 00:17:42.779252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:52.497 [2024-11-21 00:17:42.779258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:52.497 [2024-11-21 00:17:42.779265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:52.497 [2024-11-21 00:17:42.779272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:52.498 [2024-11-21 00:17:42.779281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:52.498 [2024-11-21 00:17:42.779287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:52.498 [2024-11-21 00:17:42.779307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:52.498 [2024-11-21 00:17:42.779314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:52.498 [2024-11-21 00:17:42.779322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.498 [2024-11-21 00:17:42.779328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:52.498 [2024-11-21 00:17:42.779337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:52.498 [2024-11-21 00:17:42.779343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.498 [2024-11-21 00:17:42.779350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:52.498 [2024-11-21 00:17:42.779356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:52.498 [2024-11-21 00:17:42.779364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.498 [2024-11-21 00:17:42.779370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:52.498 [2024-11-21 00:17:42.779378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:52.498 [2024-11-21 00:17:42.779384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.498 [2024-11-21 00:17:42.779392] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:52.498 [2024-11-21 00:17:42.779398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:52.498 [2024-11-21 00:17:42.779408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:52.498 [2024-11-21 00:17:42.779414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:52.498 [2024-11-21 00:17:42.779423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:52.498 [2024-11-21 00:17:42.779434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:52.498 [2024-11-21 00:17:42.779441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:52.498 [2024-11-21 00:17:42.779448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:52.498 [2024-11-21 00:17:42.779456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:52.498 [2024-11-21 00:17:42.779462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:52.498 [2024-11-21 00:17:42.779473] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:52.498 [2024-11-21 00:17:42.779487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:52.498 [2024-11-21 00:17:42.779495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:52.498 [2024-11-21 00:17:42.779502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:52.498 [2024-11-21 00:17:42.779510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:52.498 [2024-11-21 00:17:42.779517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:52.498 [2024-11-21 00:17:42.779525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:52.498 [2024-11-21 00:17:42.779531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:52.498 [2024-11-21 00:17:42.779540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:52.498 [2024-11-21 00:17:42.779547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:52.498 [2024-11-21 00:17:42.779555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:52.498 [2024-11-21 00:17:42.779561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:52.498 [2024-11-21 00:17:42.779569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:52.498 [2024-11-21 00:17:42.779575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:52.498 [2024-11-21 00:17:42.779583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:52.498 [2024-11-21 00:17:42.779589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:52.498 [2024-11-21 00:17:42.779598] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:52.498 [2024-11-21 00:17:42.779607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:52.498 [2024-11-21 00:17:42.779615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:52.498 [2024-11-21 00:17:42.779621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:52.498 [2024-11-21 00:17:42.779629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:52.498 [2024-11-21 00:17:42.779634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:52.498 [2024-11-21 00:17:42.779641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.498 [2024-11-21 00:17:42.779648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:52.498 [2024-11-21 00:17:42.779659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.608 ms 00:28:52.498 [2024-11-21 00:17:42.779664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.498 [2024-11-21 00:17:42.779695] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
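The FTL instance being brought up here sits on a two-device stack assembled by the earlier RPC calls: a thin-provisioned lvol on the 0000:00:11.0 NVMe controller as the 20 GiB base device, and the first 5120 MiB of the 0000:00:10.0 controller, split off as cachen1p0, as the NV cache. A condensed sketch of that sequence, with the PCIe addresses, sizes, and UUIDs copied from this particular run (any other run would produce different UUIDs):

    rpc="$rootdir/scripts/rpc.py"   # assumes $rootdir is an SPDK checkout
    # Base device: NVMe controller -> lvstore -> 20480 MiB thin lvol.
    "$rpc" bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # exposes basen1
    "$rpc" bdev_lvol_create_lvstore basen1 lvs
    "$rpc" bdev_lvol_create basen1p0 20480 -t -u f6e33cdf-18ad-44d4-94b5-72a061d09abd
    # NV cache: second controller, first 5120 MiB split off as cachen1p0.
    "$rpc" bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # exposes cachen1
    "$rpc" bdev_split_create cachen1 -s 5120 1
    # Tie them together; -t 60 raises the RPC timeout because a first-time
    # startup scrubs the NV cache, the multi-second step logged just below.
    "$rpc" -t 60 bdev_ftl_create -b ftl -d 6e5b1a64-4511-4cf6-b441-9b46284a43d9 \
        -c cachen1p0 --l2p_dram_limit 2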
00:28:52.498 [2024-11-21 00:17:42.779705] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:56.701 [2024-11-21 00:17:46.283399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.701 [2024-11-21 00:17:46.283440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:56.701 [2024-11-21 00:17:46.283460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3503.692 ms 00:28:56.701 [2024-11-21 00:17:46.283467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.701 [2024-11-21 00:17:46.293331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.701 [2024-11-21 00:17:46.293477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:56.701 [2024-11-21 00:17:46.293496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.788 ms 00:28:56.701 [2024-11-21 00:17:46.293503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.701 [2024-11-21 00:17:46.293549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.701 [2024-11-21 00:17:46.293557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:56.701 [2024-11-21 00:17:46.293568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:56.701 [2024-11-21 00:17:46.293579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.701 [2024-11-21 00:17:46.302670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.701 [2024-11-21 00:17:46.302698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:56.701 [2024-11-21 00:17:46.302708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.054 ms 00:28:56.701 [2024-11-21 00:17:46.302715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.701 [2024-11-21 00:17:46.302745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.701 [2024-11-21 00:17:46.302752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:56.701 [2024-11-21 00:17:46.302763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:56.701 [2024-11-21 00:17:46.302769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.701 [2024-11-21 00:17:46.303149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.701 [2024-11-21 00:17:46.303163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:56.701 [2024-11-21 00:17:46.303172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.352 ms 00:28:56.701 [2024-11-21 00:17:46.303178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.701 [2024-11-21 00:17:46.303213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.701 [2024-11-21 00:17:46.303220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:56.701 [2024-11-21 00:17:46.303228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:56.701 [2024-11-21 00:17:46.303237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.701 [2024-11-21 00:17:46.323717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.323803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:56.702 [2024-11-21 00:17:46.323841] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.451 ms 00:28:56.702 [2024-11-21 00:17:46.323863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.332069] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:56.702 [2024-11-21 00:17:46.333014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.333039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:56.702 [2024-11-21 00:17:46.333047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.928 ms 00:28:56.702 [2024-11-21 00:17:46.333054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.348510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.348651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:56.702 [2024-11-21 00:17:46.348664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.436 ms 00:28:56.702 [2024-11-21 00:17:46.348676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.348745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.348756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:56.702 [2024-11-21 00:17:46.348763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:56.702 [2024-11-21 00:17:46.348771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.351600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.351665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:56.702 [2024-11-21 00:17:46.351674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.814 ms 00:28:56.702 [2024-11-21 00:17:46.351682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.354754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.354782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:56.702 [2024-11-21 00:17:46.354790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.039 ms 00:28:56.702 [2024-11-21 00:17:46.354797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.355031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.355041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:56.702 [2024-11-21 00:17:46.355048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:28:56.702 [2024-11-21 00:17:46.355056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.390131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.390276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:56.702 [2024-11-21 00:17:46.390610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.049 ms 00:28:56.702 [2024-11-21 00:17:46.390661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.395791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:56.702 [2024-11-21 00:17:46.395913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:56.702 [2024-11-21 00:17:46.395966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.018 ms 00:28:56.702 [2024-11-21 00:17:46.395980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.399843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.399878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:56.702 [2024-11-21 00:17:46.399887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.831 ms 00:28:56.702 [2024-11-21 00:17:46.399897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.403985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.404023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:56.702 [2024-11-21 00:17:46.404033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.056 ms 00:28:56.702 [2024-11-21 00:17:46.404045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.404089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.404100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:56.702 [2024-11-21 00:17:46.404109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:56.702 [2024-11-21 00:17:46.404119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.404185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:56.702 [2024-11-21 00:17:46.404197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:56.702 [2024-11-21 00:17:46.404205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:56.702 [2024-11-21 00:17:46.404214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:56.702 [2024-11-21 00:17:46.405617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3637.049 ms, result 0 00:28:56.702 { 00:28:56.702 "name": "ftl", 00:28:56.702 "uuid": "69dda36b-7eda-4c64-a11f-835234a8bb0d" 00:28:56.702 } 00:28:56.702 00:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:56.702 [2024-11-21 00:17:46.614187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.702 00:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:56.702 00:17:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:56.702 [2024-11-21 00:17:47.038592] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:56.702 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:56.962 [2024-11-21 00:17:47.243094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:56.962 00:17:47 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:57.219 Fill FTL, iteration 1 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:57.219 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93061 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93061 /var/tmp/spdk.tgt.sock 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93061 ']' 00:28:57.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:57.220 00:17:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:57.478 [2024-11-21 00:17:47.671443] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
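Condensed from the xtrace around this point (test/ftl/upgrade_shutdown.sh lines 28-48), the fill-and-checksum loop amounts to the sketch below. FTL_FILE stands in for /home/vagrant/spdk_repo/spdk/test/ftl/file, and tcp_dd is the harness wrapper around spdk_dd traced next:

    seek=0 skip=0 bs=1048576 count=1024 qd=2 iterations=2
    sums=()
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$FTL_FILE" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum "$FTL_FILE" | cut -f1 -d' ')
    done

Each pass writes 1 GiB of random data into ftln1 at queue depth 2, reads it back through the same bdev, and stores the MD5 digest in sums[] (488277fe... and d26f5579... in this run).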
00:28:57.478 [2024-11-21 00:17:47.671559] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93061 ] 00:28:57.478 [2024-11-21 00:17:47.803039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.478 [2024-11-21 00:17:47.835595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.412 00:17:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:58.412 00:17:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:58.412 00:17:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:58.412 ftln1 00:28:58.412 00:17:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:58.412 00:17:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:58.671 00:17:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:58.671 00:17:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93061 00:28:58.671 00:17:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93061 ']' 00:28:58.671 00:17:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93061 00:28:58.671 00:17:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:58.671 00:17:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:58.671 00:17:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93061 00:28:58.671 killing process with pid 93061 00:28:58.671 00:17:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:58.671 00:17:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:58.671 00:17:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93061' 00:28:58.671 00:17:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 93061 00:28:58.671 00:17:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93061 00:28:58.931 00:17:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:58.931 00:17:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:58.931 [2024-11-21 00:17:49.338370] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
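The tcp_initiator_setup path traced above (ftl/common.sh lines 151-176) is what lets spdk_dd see ftln1: a short-lived spdk_tgt on a private RPC socket attaches the TCP-exported namespace once, its bdev subsystem config is snapshotted to ini.json, and subsequent tcp_dd invocations replay that config through spdk_dd --json. A sketch using this run's paths (waitforlisten elided):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!                                  # 93061 in this run
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0        # prints: ftln1
    { echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    kill "$spdk_ini_pid"

Later tcp_dd calls hit the [[ -f ini.json ]] check at common.sh line 153 and return immediately, as the subsequent traces show.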
00:28:58.931 [2024-11-21 00:17:49.338825] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93095 ] 00:28:59.189 [2024-11-21 00:17:49.476100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.189 [2024-11-21 00:17:49.508458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.568  [2024-11-21T00:17:51.927Z] Copying: 174/1024 [MB] (174 MBps) [2024-11-21T00:17:52.866Z] Copying: 345/1024 [MB] (171 MBps) [2024-11-21T00:17:53.803Z] Copying: 579/1024 [MB] (234 MBps) [2024-11-21T00:17:54.743Z] Copying: 835/1024 [MB] (256 MBps) [2024-11-21T00:17:54.743Z] Copying: 1024/1024 [MB] (average 215 MBps) 00:29:04.322 00:29:04.323 Calculate MD5 checksum, iteration 1 00:29:04.323 00:17:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:04.323 00:17:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:04.323 00:17:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:04.323 00:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:04.323 00:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:04.323 00:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:04.323 00:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:04.323 00:17:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:04.323 [2024-11-21 00:17:54.645426] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:29:04.323 [2024-11-21 00:17:54.645899] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93148 ] 00:29:04.584 [2024-11-21 00:17:54.781912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.584 [2024-11-21 00:17:54.811163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.965  [2024-11-21T00:17:56.644Z] Copying: 665/1024 [MB] (665 MBps) [2024-11-21T00:17:56.901Z] Copying: 1024/1024 [MB] (average 652 MBps) 00:29:06.481 00:29:06.481 00:17:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:06.481 00:17:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:08.381 00:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:08.381 00:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=488277fe57f6d8b86f20376f504256ba 00:29:08.381 00:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:08.381 00:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:08.381 00:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:08.381 Fill FTL, iteration 2 00:29:08.381 00:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:08.381 00:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:08.381 00:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:08.381 00:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:08.381 00:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:08.381 00:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:08.639 [2024-11-21 00:17:58.850140] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:29:08.639 [2024-11-21 00:17:58.850387] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93199 ] 00:29:08.639 [2024-11-21 00:17:58.985913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.639 [2024-11-21 00:17:59.013926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.018  [2024-11-21T00:18:01.377Z] Copying: 259/1024 [MB] (259 MBps) [2024-11-21T00:18:02.318Z] Copying: 521/1024 [MB] (262 MBps) [2024-11-21T00:18:03.258Z] Copying: 776/1024 [MB] (255 MBps) [2024-11-21T00:18:03.520Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:29:13.099 00:29:13.099 00:18:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:13.099 00:18:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:13.099 Calculate MD5 checksum, iteration 2 00:29:13.099 00:18:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:13.099 00:18:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:13.099 00:18:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:13.099 00:18:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:13.099 00:18:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:13.099 00:18:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:13.099 [2024-11-21 00:18:03.361950] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
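With both fills checksummed, the test turns to FTL properties over RPC (upgrade_shutdown.sh lines 52-71, traced below): verbose_mode is enabled, prep_upgrade_on_shutdown is set, and a used-chunk gate confirms the NV cache actually holds data. The gate is a jq count over the bdev_ftl_get_properties output, using the exact filter from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    used=$($rpc bdev_ftl_get_properties -b ftl |
           jq '[.properties[] | select(.name == "cache_device")
                 | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]]   # false here: 2 CLOSED chunks plus 1 partially filled OPEN chunk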
00:29:13.099 [2024-11-21 00:18:03.362067] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93252 ] 00:29:13.099 [2024-11-21 00:18:03.498244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.360 [2024-11-21 00:18:03.539568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.744  [2024-11-21T00:18:05.736Z] Copying: 661/1024 [MB] (661 MBps) [2024-11-21T00:18:05.997Z] Copying: 1024/1024 [MB] (average 651 MBps) 00:29:15.577 00:29:15.577 00:18:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:15.577 00:18:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:18.106 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:18.106 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d26f55796c02748cbfd4e1eff565f419 00:29:18.106 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:18.106 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:18.106 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:18.106 [2024-11-21 00:18:08.282326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.106 [2024-11-21 00:18:08.282376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:18.106 [2024-11-21 00:18:08.282389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:18.106 [2024-11-21 00:18:08.282395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.106 [2024-11-21 00:18:08.282413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.106 [2024-11-21 00:18:08.282421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:18.106 [2024-11-21 00:18:08.282431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:18.106 [2024-11-21 00:18:08.282437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.106 [2024-11-21 00:18:08.282453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.106 [2024-11-21 00:18:08.282460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:18.106 [2024-11-21 00:18:08.282467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:18.106 [2024-11-21 00:18:08.282473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.106 [2024-11-21 00:18:08.282527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.196 ms, result 0 00:29:18.107 true 00:29:18.107 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:18.107 { 00:29:18.107 "name": "ftl", 00:29:18.107 "properties": [ 00:29:18.107 { 00:29:18.107 "name": "superblock_version", 00:29:18.107 "value": 5, 00:29:18.107 "read-only": true 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "name": "base_device", 00:29:18.107 "bands": [ 00:29:18.107 { 00:29:18.107 "id": 0, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 
00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 1, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 2, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 3, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 4, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 5, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 6, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 7, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 8, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 9, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 10, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 11, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 12, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 13, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 14, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 15, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 16, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 17, 00:29:18.107 "state": "FREE", 00:29:18.107 "validity": 0.0 00:29:18.107 } 00:29:18.107 ], 00:29:18.107 "read-only": true 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "name": "cache_device", 00:29:18.107 "type": "bdev", 00:29:18.107 "chunks": [ 00:29:18.107 { 00:29:18.107 "id": 0, 00:29:18.107 "state": "INACTIVE", 00:29:18.107 "utilization": 0.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 1, 00:29:18.107 "state": "CLOSED", 00:29:18.107 "utilization": 1.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 2, 00:29:18.107 "state": "CLOSED", 00:29:18.107 "utilization": 1.0 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 3, 00:29:18.107 "state": "OPEN", 00:29:18.107 "utilization": 0.001953125 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "id": 4, 00:29:18.107 "state": "OPEN", 00:29:18.107 "utilization": 0.0 00:29:18.107 } 00:29:18.107 ], 00:29:18.107 "read-only": true 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "name": "verbose_mode", 00:29:18.107 "value": true, 00:29:18.107 "unit": "", 00:29:18.107 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:18.107 }, 00:29:18.107 { 00:29:18.107 "name": "prep_upgrade_on_shutdown", 00:29:18.107 "value": false, 00:29:18.107 "unit": "", 00:29:18.107 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:18.107 } 00:29:18.107 ] 00:29:18.107 } 00:29:18.107 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:18.365 [2024-11-21 00:18:08.702652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:18.365 [2024-11-21 00:18:08.702792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:18.365 [2024-11-21 00:18:08.702842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:18.365 [2024-11-21 00:18:08.702860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.365 [2024-11-21 00:18:08.702890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.365 [2024-11-21 00:18:08.702907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:18.366 [2024-11-21 00:18:08.702922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:18.366 [2024-11-21 00:18:08.702937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.366 [2024-11-21 00:18:08.702961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.366 [2024-11-21 00:18:08.702978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:18.366 [2024-11-21 00:18:08.702994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:18.366 [2024-11-21 00:18:08.703035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.366 [2024-11-21 00:18:08.703091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.426 ms, result 0 00:29:18.366 true 00:29:18.366 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:18.366 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:18.366 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:18.624 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:18.624 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:18.624 00:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:18.883 [2024-11-21 00:18:09.110998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.883 [2024-11-21 00:18:09.111027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:18.883 [2024-11-21 00:18:09.111035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:18.883 [2024-11-21 00:18:09.111040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.883 [2024-11-21 00:18:09.111056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.883 [2024-11-21 00:18:09.111062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:18.883 [2024-11-21 00:18:09.111068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:18.883 [2024-11-21 00:18:09.111073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:18.883 [2024-11-21 00:18:09.111087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:18.883 [2024-11-21 00:18:09.111093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:18.883 [2024-11-21 00:18:09.111098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:18.883 [2024-11-21 00:18:09.111104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:18.883 [2024-11-21 00:18:09.111140] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.133 ms, result 0 00:29:18.883 true 00:29:18.883 00:18:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:19.141 { 00:29:19.141 "name": "ftl", 00:29:19.141 "properties": [ 00:29:19.141 { 00:29:19.141 "name": "superblock_version", 00:29:19.141 "value": 5, 00:29:19.141 "read-only": true 00:29:19.141 }, 00:29:19.141 { 00:29:19.141 "name": "base_device", 00:29:19.141 "bands": [ 00:29:19.142 { 00:29:19.142 "id": 0, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 1, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 2, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 3, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 4, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 5, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 6, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 7, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 8, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 9, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 10, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 11, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 12, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 13, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 14, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 15, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 16, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 17, 00:29:19.142 "state": "FREE", 00:29:19.142 "validity": 0.0 00:29:19.142 } 00:29:19.142 ], 00:29:19.142 "read-only": true 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "name": "cache_device", 00:29:19.142 "type": "bdev", 00:29:19.142 "chunks": [ 00:29:19.142 { 00:29:19.142 "id": 0, 00:29:19.142 "state": "INACTIVE", 00:29:19.142 "utilization": 0.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 1, 00:29:19.142 "state": "CLOSED", 00:29:19.142 "utilization": 1.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 2, 00:29:19.142 "state": "CLOSED", 00:29:19.142 "utilization": 1.0 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 3, 00:29:19.142 "state": "OPEN", 00:29:19.142 "utilization": 0.001953125 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "id": 4, 00:29:19.142 "state": "OPEN", 00:29:19.142 "utilization": 0.0 00:29:19.142 } 00:29:19.142 ], 00:29:19.142 "read-only": true 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "name": "verbose_mode", 
00:29:19.142 "value": true, 00:29:19.142 "unit": "", 00:29:19.142 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:19.142 }, 00:29:19.142 { 00:29:19.142 "name": "prep_upgrade_on_shutdown", 00:29:19.142 "value": true, 00:29:19.142 "unit": "", 00:29:19.142 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:19.142 } 00:29:19.142 ] 00:29:19.142 } 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92941 ]] 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92941 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92941 ']' 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92941 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92941 00:29:19.142 killing process with pid 92941 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92941' 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92941 00:29:19.142 00:18:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92941 00:29:19.142 [2024-11-21 00:18:09.479179] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:19.142 [2024-11-21 00:18:09.485631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.142 [2024-11-21 00:18:09.485734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:19.142 [2024-11-21 00:18:09.485748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:19.142 [2024-11-21 00:18:09.485755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.142 [2024-11-21 00:18:09.485779] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:19.142 [2024-11-21 00:18:09.486309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.142 [2024-11-21 00:18:09.486325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:19.142 [2024-11-21 00:18:09.486338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.516 ms 00:29:19.142 [2024-11-21 00:18:09.486344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.348 [2024-11-21 00:18:17.707757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.348 [2024-11-21 00:18:17.707821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:27.348 [2024-11-21 00:18:17.707834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8221.367 ms 00:29:27.348 [2024-11-21 00:18:17.707845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.348 [2024-11-21 00:18:17.709209] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:27.348 [2024-11-21 00:18:17.709237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:27.348 [2024-11-21 00:18:17.709246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.351 ms 00:29:27.348 [2024-11-21 00:18:17.709252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.348 [2024-11-21 00:18:17.710156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.348 [2024-11-21 00:18:17.710174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:27.348 [2024-11-21 00:18:17.710183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.882 ms 00:29:27.348 [2024-11-21 00:18:17.710194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.348 [2024-11-21 00:18:17.712597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.348 [2024-11-21 00:18:17.712625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:27.348 [2024-11-21 00:18:17.712633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.372 ms 00:29:27.348 [2024-11-21 00:18:17.712640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.348 [2024-11-21 00:18:17.715368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.348 [2024-11-21 00:18:17.715403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:27.348 [2024-11-21 00:18:17.715411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.702 ms 00:29:27.348 [2024-11-21 00:18:17.715418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.348 [2024-11-21 00:18:17.715474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.348 [2024-11-21 00:18:17.715483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:27.348 [2024-11-21 00:18:17.715495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:27.349 [2024-11-21 00:18:17.715506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.717419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.349 [2024-11-21 00:18:17.717447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:27.349 [2024-11-21 00:18:17.717455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.901 ms 00:29:27.349 [2024-11-21 00:18:17.717461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.719451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.349 [2024-11-21 00:18:17.719585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:27.349 [2024-11-21 00:18:17.719598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.965 ms 00:29:27.349 [2024-11-21 00:18:17.719604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.720899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.349 [2024-11-21 00:18:17.720922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:27.349 [2024-11-21 00:18:17.720929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.271 ms 00:29:27.349 [2024-11-21 00:18:17.720935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.723042] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.349 [2024-11-21 00:18:17.723143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:27.349 [2024-11-21 00:18:17.723155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.686 ms 00:29:27.349 [2024-11-21 00:18:17.723160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.723183] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:27.349 [2024-11-21 00:18:17.723195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:27.349 [2024-11-21 00:18:17.723203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:27.349 [2024-11-21 00:18:17.723209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:27.349 [2024-11-21 00:18:17.723216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:27.349 [2024-11-21 00:18:17.723320] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:27.349 [2024-11-21 00:18:17.723327] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 69dda36b-7eda-4c64-a11f-835234a8bb0d 00:29:27.349 [2024-11-21 00:18:17.723333] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:27.349 [2024-11-21 00:18:17.723340] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:27.349 [2024-11-21 00:18:17.723345] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:27.349 [2024-11-21 00:18:17.723352] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:27.349 [2024-11-21 00:18:17.723359] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:27.349 [2024-11-21 00:18:17.723370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:27.349 [2024-11-21 00:18:17.723376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:27.349 [2024-11-21 00:18:17.723382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:27.349 [2024-11-21 00:18:17.723387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:27.349 [2024-11-21 00:18:17.723399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.349 [2024-11-21 00:18:17.723405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:27.349 [2024-11-21 00:18:17.723413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.216 ms 00:29:27.349 [2024-11-21 00:18:17.723419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.725165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.349 [2024-11-21 00:18:17.725187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:27.349 [2024-11-21 00:18:17.725196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.729 ms 00:29:27.349 [2024-11-21 00:18:17.725205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.725292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.349 [2024-11-21 00:18:17.725310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:27.349 [2024-11-21 00:18:17.725318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:29:27.349 [2024-11-21 00:18:17.725325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.731466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.731576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:27.349 [2024-11-21 00:18:17.731626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.731644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.731681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.731697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:27.349 [2024-11-21 00:18:17.731713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.731734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.731796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.731816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:27.349 [2024-11-21 00:18:17.731833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.731887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.731913] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.731929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:27.349 [2024-11-21 00:18:17.731945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.731960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.742618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.742730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:27.349 [2024-11-21 00:18:17.742772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.742795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.751401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.751513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:27.349 [2024-11-21 00:18:17.751554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.751572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.751646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.751665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:27.349 [2024-11-21 00:18:17.751683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.751698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.751754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.751773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:27.349 [2024-11-21 00:18:17.751789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.751843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.751927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.751947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:27.349 [2024-11-21 00:18:17.751962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.751977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.752013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.752034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:27.349 [2024-11-21 00:18:17.752050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.752064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 [2024-11-21 00:18:17.752111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.752155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:27.349 [2024-11-21 00:18:17.752173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.752188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.349 
[2024-11-21 00:18:17.752322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:27.349 [2024-11-21 00:18:17.752348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:27.349 [2024-11-21 00:18:17.752365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:27.349 [2024-11-21 00:18:17.752380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.350 [2024-11-21 00:18:17.752543] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8266.856 ms, result 0 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93428 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93428 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93428 ']' 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:35.473 00:18:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:35.473 [2024-11-21 00:18:25.588802] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
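The 'FTL shutdown' management process has just finished (duration 8266.856 ms, result 0), and tcp_target_setup now relaunches the target from the JSON configuration captured before shutdown, so the ftl bdev comes back on the same base and cache bdevs with the prep_upgrade_on_shutdown work already persisted. A sketch of the restart traced just above (ftl/common.sh lines 81-91):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!                  # 93428 in this run
    waitforlisten "$spdk_tgt_pid"    # blocks until /var/tmp/spdk.sock answers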
00:29:35.473 [2024-11-21 00:18:25.589076] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93428 ] 00:29:35.473 [2024-11-21 00:18:25.722874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.473 [2024-11-21 00:18:25.764712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.732 [2024-11-21 00:18:26.059650] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:35.732 [2024-11-21 00:18:26.059708] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:35.991 [2024-11-21 00:18:26.205648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.991 [2024-11-21 00:18:26.205685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:35.991 [2024-11-21 00:18:26.205698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:35.991 [2024-11-21 00:18:26.205705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.991 [2024-11-21 00:18:26.205752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.991 [2024-11-21 00:18:26.205760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:35.991 [2024-11-21 00:18:26.205767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:35.991 [2024-11-21 00:18:26.205772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.991 [2024-11-21 00:18:26.205790] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:35.991 [2024-11-21 00:18:26.205977] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:35.991 [2024-11-21 00:18:26.205990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.991 [2024-11-21 00:18:26.205995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:35.991 [2024-11-21 00:18:26.206004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:29:35.991 [2024-11-21 00:18:26.206009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.991 [2024-11-21 00:18:26.207263] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:35.991 [2024-11-21 00:18:26.210168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.991 [2024-11-21 00:18:26.210204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:35.991 [2024-11-21 00:18:26.210212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.907 ms 00:29:35.991 [2024-11-21 00:18:26.210221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.991 [2024-11-21 00:18:26.210269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.991 [2024-11-21 00:18:26.210280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:35.991 [2024-11-21 00:18:26.210286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:35.991 [2024-11-21 00:18:26.210292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.991 [2024-11-21 00:18:26.216604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.991 [2024-11-21 
00:18:26.216631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:35.991 [2024-11-21 00:18:26.216642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.252 ms 00:29:35.991 [2024-11-21 00:18:26.216647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.991 [2024-11-21 00:18:26.216685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.991 [2024-11-21 00:18:26.216695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:35.991 [2024-11-21 00:18:26.216701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:35.992 [2024-11-21 00:18:26.216707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.992 [2024-11-21 00:18:26.216739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.992 [2024-11-21 00:18:26.216747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:35.992 [2024-11-21 00:18:26.216756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:35.992 [2024-11-21 00:18:26.216766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.992 [2024-11-21 00:18:26.216782] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:35.992 [2024-11-21 00:18:26.218362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.992 [2024-11-21 00:18:26.218384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:35.992 [2024-11-21 00:18:26.218395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.584 ms 00:29:35.992 [2024-11-21 00:18:26.218401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.992 [2024-11-21 00:18:26.218427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.992 [2024-11-21 00:18:26.218435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:35.992 [2024-11-21 00:18:26.218444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:35.992 [2024-11-21 00:18:26.218452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.992 [2024-11-21 00:18:26.218468] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:35.992 [2024-11-21 00:18:26.218485] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:35.992 [2024-11-21 00:18:26.218516] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:35.992 [2024-11-21 00:18:26.218528] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:35.992 [2024-11-21 00:18:26.218610] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:35.992 [2024-11-21 00:18:26.218620] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:35.992 [2024-11-21 00:18:26.218632] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:35.992 [2024-11-21 00:18:26.218642] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:35.992 [2024-11-21 00:18:26.218650] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:35.992 [2024-11-21 00:18:26.218657] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:35.992 [2024-11-21 00:18:26.218665] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:35.992 [2024-11-21 00:18:26.218671] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:35.992 [2024-11-21 00:18:26.218678] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:35.992 [2024-11-21 00:18:26.218686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.992 [2024-11-21 00:18:26.218692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:35.992 [2024-11-21 00:18:26.218698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.220 ms 00:29:35.992 [2024-11-21 00:18:26.218704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.992 [2024-11-21 00:18:26.218776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.992 [2024-11-21 00:18:26.218782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:35.992 [2024-11-21 00:18:26.218792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:35.992 [2024-11-21 00:18:26.218798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.992 [2024-11-21 00:18:26.218882] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:35.992 [2024-11-21 00:18:26.218893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:35.992 [2024-11-21 00:18:26.218902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:35.992 [2024-11-21 00:18:26.218908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.992 [2024-11-21 00:18:26.218914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:35.992 [2024-11-21 00:18:26.218919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:35.992 [2024-11-21 00:18:26.218925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:35.992 [2024-11-21 00:18:26.218931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:35.992 [2024-11-21 00:18:26.218938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:35.992 [2024-11-21 00:18:26.218944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.992 [2024-11-21 00:18:26.218949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:35.992 [2024-11-21 00:18:26.218954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:35.992 [2024-11-21 00:18:26.218959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.992 [2024-11-21 00:18:26.218965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:35.992 [2024-11-21 00:18:26.218970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:35.992 [2024-11-21 00:18:26.218981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.992 [2024-11-21 00:18:26.218987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:35.992 [2024-11-21 00:18:26.218996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:35.992 [2024-11-21 00:18:26.219001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.992 [2024-11-21 00:18:26.219006] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:35.992 [2024-11-21 00:18:26.219011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:35.992 [2024-11-21 00:18:26.219016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:35.992 [2024-11-21 00:18:26.219021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:35.992 [2024-11-21 00:18:26.219026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:35.992 [2024-11-21 00:18:26.219032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:35.992 [2024-11-21 00:18:26.219037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:35.992 [2024-11-21 00:18:26.219042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:35.992 [2024-11-21 00:18:26.219047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:35.992 [2024-11-21 00:18:26.219052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:35.992 [2024-11-21 00:18:26.219057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:35.992 [2024-11-21 00:18:26.219062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:35.992 [2024-11-21 00:18:26.219067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:35.992 [2024-11-21 00:18:26.219072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:35.992 [2024-11-21 00:18:26.219079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.992 [2024-11-21 00:18:26.219084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:35.992 [2024-11-21 00:18:26.219089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:35.992 [2024-11-21 00:18:26.219094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.992 [2024-11-21 00:18:26.219099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:35.992 [2024-11-21 00:18:26.219104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:35.992 [2024-11-21 00:18:26.219109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.992 [2024-11-21 00:18:26.219114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:35.992 [2024-11-21 00:18:26.219119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:35.992 [2024-11-21 00:18:26.219124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.992 [2024-11-21 00:18:26.219129] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:35.992 [2024-11-21 00:18:26.219137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:35.992 [2024-11-21 00:18:26.219142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:35.992 [2024-11-21 00:18:26.219148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:35.992 [2024-11-21 00:18:26.219155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:35.992 [2024-11-21 00:18:26.219161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:35.992 [2024-11-21 00:18:26.219168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:35.992 [2024-11-21 00:18:26.219174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:35.992 [2024-11-21 00:18:26.219179] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:35.992 [2024-11-21 00:18:26.219185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:35.992 [2024-11-21 00:18:26.219192] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:35.992 [2024-11-21 00:18:26.219199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.992 [2024-11-21 00:18:26.219208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:35.992 [2024-11-21 00:18:26.219214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:35.992 [2024-11-21 00:18:26.219219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:35.992 [2024-11-21 00:18:26.219225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:35.992 [2024-11-21 00:18:26.219231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:35.992 [2024-11-21 00:18:26.219237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:35.992 [2024-11-21 00:18:26.219242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:35.992 [2024-11-21 00:18:26.219248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:35.992 [2024-11-21 00:18:26.219254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:35.992 [2024-11-21 00:18:26.219259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:35.992 [2024-11-21 00:18:26.219266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:35.992 [2024-11-21 00:18:26.219272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:35.993 [2024-11-21 00:18:26.219277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:35.993 [2024-11-21 00:18:26.219284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:35.993 [2024-11-21 00:18:26.219289] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:35.993 [2024-11-21 00:18:26.219327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.993 [2024-11-21 00:18:26.219333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:35.993 [2024-11-21 00:18:26.219340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:35.993 [2024-11-21 00:18:26.219345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:35.993 [2024-11-21 00:18:26.219351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:35.993 [2024-11-21 00:18:26.219362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.993 [2024-11-21 00:18:26.219368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:35.993 [2024-11-21 00:18:26.219374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.534 ms 00:29:35.993 [2024-11-21 00:18:26.219380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.993 [2024-11-21 00:18:26.219416] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:35.993 [2024-11-21 00:18:26.219424] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:39.280 [2024-11-21 00:18:29.505757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.505848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:39.280 [2024-11-21 00:18:29.505866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3286.327 ms 00:29:39.280 [2024-11-21 00:18:29.505877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.524316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.524379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:39.280 [2024-11-21 00:18:29.524394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.266 ms 00:29:39.280 [2024-11-21 00:18:29.524403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.524490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.524501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:39.280 [2024-11-21 00:18:29.524511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:39.280 [2024-11-21 00:18:29.524521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.548132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.548208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:39.280 [2024-11-21 00:18:29.548223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.563 ms 00:29:39.280 [2024-11-21 00:18:29.548233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.548280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.548330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:39.280 [2024-11-21 00:18:29.548341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:39.280 [2024-11-21 00:18:29.548351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.549098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.549149] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:39.280 [2024-11-21 00:18:29.549161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.672 ms 00:29:39.280 [2024-11-21 00:18:29.549171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.549229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.549248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:39.280 [2024-11-21 00:18:29.549263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:39.280 [2024-11-21 00:18:29.549272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.560932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.560986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:39.280 [2024-11-21 00:18:29.560998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.635 ms 00:29:39.280 [2024-11-21 00:18:29.561007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.565745] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:39.280 [2024-11-21 00:18:29.565799] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:39.280 [2024-11-21 00:18:29.565814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.565835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:39.280 [2024-11-21 00:18:29.565845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.687 ms 00:29:39.280 [2024-11-21 00:18:29.565853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.570981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.571259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:39.280 [2024-11-21 00:18:29.571290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.069 ms 00:29:39.280 [2024-11-21 00:18:29.571320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.574105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.574153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:39.280 [2024-11-21 00:18:29.574165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.731 ms 00:29:39.280 [2024-11-21 00:18:29.574174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.576972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.577143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:39.280 [2024-11-21 00:18:29.577162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.742 ms 00:29:39.280 [2024-11-21 00:18:29.577170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.577657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.577695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:39.280 [2024-11-21 
00:18:29.577713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.271 ms 00:29:39.280 [2024-11-21 00:18:29.577723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.608717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.608941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:39.280 [2024-11-21 00:18:29.608971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.967 ms 00:29:39.280 [2024-11-21 00:18:29.608980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.618040] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:39.280 [2024-11-21 00:18:29.619103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.619144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:39.280 [2024-11-21 00:18:29.619158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.004 ms 00:29:39.280 [2024-11-21 00:18:29.619173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.619290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.619328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:39.280 [2024-11-21 00:18:29.619339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:39.280 [2024-11-21 00:18:29.619347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.619416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.619427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:39.280 [2024-11-21 00:18:29.619437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:39.280 [2024-11-21 00:18:29.619447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.619503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.619514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:39.280 [2024-11-21 00:18:29.619524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:39.280 [2024-11-21 00:18:29.619534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.619576] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:39.280 [2024-11-21 00:18:29.619589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.619599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:39.280 [2024-11-21 00:18:29.619609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:39.280 [2024-11-21 00:18:29.619619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.625008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.625069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:39.280 [2024-11-21 00:18:29.625082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.360 ms 00:29:39.280 [2024-11-21 00:18:29.625091] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.625186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.280 [2024-11-21 00:18:29.625202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:39.280 [2024-11-21 00:18:29.625212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:39.280 [2024-11-21 00:18:29.625220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.280 [2024-11-21 00:18:29.626668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3420.399 ms, result 0 00:29:39.280 [2024-11-21 00:18:29.639887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.280 [2024-11-21 00:18:29.655899] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:39.280 [2024-11-21 00:18:29.664079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:39.541 00:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:39.541 00:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:39.541 00:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:39.541 00:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:39.541 00:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:39.799 [2024-11-21 00:18:30.044233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.799 [2024-11-21 00:18:30.044282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:39.799 [2024-11-21 00:18:30.044322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:39.799 [2024-11-21 00:18:30.044332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.799 [2024-11-21 00:18:30.044357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.799 [2024-11-21 00:18:30.044367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:39.799 [2024-11-21 00:18:30.044375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:39.799 [2024-11-21 00:18:30.044384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.799 [2024-11-21 00:18:30.044408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.799 [2024-11-21 00:18:30.044418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:39.799 [2024-11-21 00:18:30.044426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:39.799 [2024-11-21 00:18:30.044434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.799 [2024-11-21 00:18:30.044491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.256 ms, result 0 00:29:39.799 true 00:29:39.799 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:40.060 { 00:29:40.060 "name": "ftl", 00:29:40.060 "properties": [ 00:29:40.060 { 00:29:40.060 "name": "superblock_version", 00:29:40.060 "value": 5, 00:29:40.060 "read-only": true 00:29:40.060 }, 
00:29:40.060 { 00:29:40.060 "name": "base_device", 00:29:40.060 "bands": [ 00:29:40.060 { 00:29:40.060 "id": 0, 00:29:40.060 "state": "CLOSED", 00:29:40.060 "validity": 1.0 00:29:40.060 }, 00:29:40.060 { 00:29:40.060 "id": 1, 00:29:40.060 "state": "CLOSED", 00:29:40.060 "validity": 1.0 00:29:40.060 }, 00:29:40.060 { 00:29:40.060 "id": 2, 00:29:40.060 "state": "CLOSED", 00:29:40.060 "validity": 0.007843137254901933 00:29:40.060 }, 00:29:40.060 { 00:29:40.060 "id": 3, 00:29:40.060 "state": "FREE", 00:29:40.060 "validity": 0.0 00:29:40.060 }, 00:29:40.060 { 00:29:40.060 "id": 4, 00:29:40.060 "state": "FREE", 00:29:40.060 "validity": 0.0 00:29:40.060 }, 00:29:40.060 { 00:29:40.060 "id": 5, 00:29:40.060 "state": "FREE", 00:29:40.060 "validity": 0.0 00:29:40.060 }, 00:29:40.060 { 00:29:40.060 "id": 6, 00:29:40.060 "state": "FREE", 00:29:40.060 "validity": 0.0 00:29:40.060 }, 00:29:40.060 { 00:29:40.060 "id": 7, 00:29:40.060 "state": "FREE", 00:29:40.060 "validity": 0.0 00:29:40.060 }, 00:29:40.060 { 00:29:40.060 "id": 8, 00:29:40.060 "state": "FREE", 00:29:40.060 "validity": 0.0 00:29:40.060 }, 00:29:40.060 { 00:29:40.060 "id": 9, 00:29:40.060 "state": "FREE", 00:29:40.060 "validity": 0.0 00:29:40.060 }, 00:29:40.060 { 00:29:40.060 "id": 10, 00:29:40.060 "state": "FREE", 00:29:40.060 "validity": 0.0 00:29:40.060 }, 00:29:40.060 { 00:29:40.060 "id": 11, 00:29:40.060 "state": "FREE", 00:29:40.061 "validity": 0.0 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "id": 12, 00:29:40.061 "state": "FREE", 00:29:40.061 "validity": 0.0 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "id": 13, 00:29:40.061 "state": "FREE", 00:29:40.061 "validity": 0.0 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "id": 14, 00:29:40.061 "state": "FREE", 00:29:40.061 "validity": 0.0 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "id": 15, 00:29:40.061 "state": "FREE", 00:29:40.061 "validity": 0.0 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "id": 16, 00:29:40.061 "state": "FREE", 00:29:40.061 "validity": 0.0 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "id": 17, 00:29:40.061 "state": "FREE", 00:29:40.061 "validity": 0.0 00:29:40.061 } 00:29:40.061 ], 00:29:40.061 "read-only": true 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "name": "cache_device", 00:29:40.061 "type": "bdev", 00:29:40.061 "chunks": [ 00:29:40.061 { 00:29:40.061 "id": 0, 00:29:40.061 "state": "INACTIVE", 00:29:40.061 "utilization": 0.0 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "id": 1, 00:29:40.061 "state": "OPEN", 00:29:40.061 "utilization": 0.0 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "id": 2, 00:29:40.061 "state": "OPEN", 00:29:40.061 "utilization": 0.0 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "id": 3, 00:29:40.061 "state": "FREE", 00:29:40.061 "utilization": 0.0 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "id": 4, 00:29:40.061 "state": "FREE", 00:29:40.061 "utilization": 0.0 00:29:40.061 } 00:29:40.061 ], 00:29:40.061 "read-only": true 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "name": "verbose_mode", 00:29:40.061 "value": true, 00:29:40.061 "unit": "", 00:29:40.061 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:40.061 }, 00:29:40.061 { 00:29:40.061 "name": "prep_upgrade_on_shutdown", 00:29:40.061 "value": false, 00:29:40.061 "unit": "", 00:29:40.061 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:40.061 } 00:29:40.061 ] 00:29:40.061 } 00:29:40.061 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:40.061 00:18:30 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:40.061 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:40.061 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:40.061 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:40.061 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:40.061 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:40.061 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:40.320 Validate MD5 checksum, iteration 1 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:40.320 00:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:40.320 [2024-11-21 00:18:30.726422] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
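The records that follow come from test_validate_checksum: tcp_dd drives spdk_dd through the NVMe/TCP initiator config (ini.json) to read 1024 MiB from ftln1 into a scratch file, which is then md5summed. A condensed sketch of that loop using only commands visible in the xtrace; tcp_dd and testdir are the suite's own helper/variable, and the expected checksums (recorded when the data was originally written) are passed in here as an assumption about state the suite keeps elsewhere:

    test_validate_checksum_sketch() {     # expected per-iteration md5 sums as args
        local -a expected=("$@")
        local skip=0 i sum
        for (( i = 0; i < ${#expected[@]}; i++ )); do
            echo "Validate MD5 checksum, iteration $(( i + 1 ))"
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$(( skip + 1024 ))
            sum=$(md5sum "$testdir/file" | cut -f1 '-d ')
            [[ $sum == "${expected[i]}" ]] || return 1   # mismatch: data lost across restart
        done
    }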
00:29:40.320 [2024-11-21 00:18:30.726666] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93497 ] 00:29:40.579 [2024-11-21 00:18:30.863805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.579 [2024-11-21 00:18:30.897254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.964  [2024-11-21T00:18:33.325Z] Copying: 492/1024 [MB] (492 MBps) [2024-11-21T00:18:33.325Z] Copying: 1019/1024 [MB] (527 MBps) [2024-11-21T00:18:33.892Z] Copying: 1024/1024 [MB] (average 509 MBps) 00:29:43.471 00:29:43.471 00:18:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:43.471 00:18:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=488277fe57f6d8b86f20376f504256ba 00:29:45.374 Validate MD5 checksum, iteration 2 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 488277fe57f6d8b86f20376f504256ba != \4\8\8\2\7\7\f\e\5\7\f\6\d\8\b\8\6\f\2\0\3\7\6\f\5\0\4\2\5\6\b\a ]] 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:45.374 00:18:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:45.374 [2024-11-21 00:18:35.760100] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
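Each pass advances --skip by the previous --count, and spdk_dd's --skip/--count are dd-style multiples of --bs, so the two reads above cover disjoint 1 GiB windows of the bdev. A quick check of the byte ranges:

    bs=1048576 count=1024
    for skip in 0 1024; do
        printf 'skip=%-4d -> bytes [%d, %d)\n' "$skip" $(( skip * bs )) $(( (skip + count) * bs ))
    done
    # prints: skip=0    -> bytes [0, 1073741824)
    #         skip=1024 -> bytes [1073741824, 2147483648)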
00:29:45.374 [2024-11-21 00:18:35.760189] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93553 ] 00:29:45.633 [2024-11-21 00:18:35.893067] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.633 [2024-11-21 00:18:35.926166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.014  [2024-11-21T00:18:38.006Z] Copying: 657/1024 [MB] (657 MBps) [2024-11-21T00:18:38.574Z] Copying: 1024/1024 [MB] (average 586 MBps) 00:29:48.153 00:29:48.153 00:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:48.153 00:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d26f55796c02748cbfd4e1eff565f419 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d26f55796c02748cbfd4e1eff565f419 != \d\2\6\f\5\5\7\9\6\c\0\2\7\4\8\c\b\f\d\4\e\1\e\f\f\5\6\5\f\4\1\9 ]] 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 93428 ]] 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 93428 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93607 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93607 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93607 ']' 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
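At this point the suite has killed the target with SIGKILL, so the FTL shutdown path never ran, and it relaunches spdk_tgt (now pid 93607) against the same tgt.json; the next startup has to recover from the resulting dirty state. A minimal sketch of that step, using only what the xtrace shows:

    tcp_target_shutdown_dirty_sketch() {
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"   # no graceful FTL shutdown
        unset spdk_tgt_pid
    }
    tcp_target_shutdown_dirty_sketch
    tcp_target_setup    # relaunch; the fresh FTL load must handle the dirty state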
00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:50.054 00:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:50.054 [2024-11-21 00:18:40.286553] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:50.054 [2024-11-21 00:18:40.286666] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93607 ] 00:29:50.054 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 93428 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:50.054 [2024-11-21 00:18:40.420948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.054 [2024-11-21 00:18:40.461270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.624 [2024-11-21 00:18:40.755205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:50.624 [2024-11-21 00:18:40.755271] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:50.624 [2024-11-21 00:18:40.897667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.624 [2024-11-21 00:18:40.897703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:50.624 [2024-11-21 00:18:40.897716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:50.624 [2024-11-21 00:18:40.897723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.897764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.624 [2024-11-21 00:18:40.897772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:50.624 [2024-11-21 00:18:40.897779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:50.624 [2024-11-21 00:18:40.897786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.897807] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:50.624 [2024-11-21 00:18:40.897983] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:50.624 [2024-11-21 00:18:40.897994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.624 [2024-11-21 00:18:40.898003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:50.624 [2024-11-21 00:18:40.898013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:29:50.624 [2024-11-21 00:18:40.898019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.898238] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:50.624 [2024-11-21 00:18:40.901959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.624 [2024-11-21 00:18:40.901993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:50.624 [2024-11-21 00:18:40.902001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.721 ms 00:29:50.624 [2024-11-21 00:18:40.902012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.902933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:50.624 [2024-11-21 00:18:40.902960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:50.624 [2024-11-21 00:18:40.902971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:50.624 [2024-11-21 00:18:40.902977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.903193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.624 [2024-11-21 00:18:40.903202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:50.624 [2024-11-21 00:18:40.903211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.169 ms 00:29:50.624 [2024-11-21 00:18:40.903218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.903247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.624 [2024-11-21 00:18:40.903254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:50.624 [2024-11-21 00:18:40.903263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:50.624 [2024-11-21 00:18:40.903270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.903290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.624 [2024-11-21 00:18:40.903312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:50.624 [2024-11-21 00:18:40.903319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:50.624 [2024-11-21 00:18:40.903326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.903348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:50.624 [2024-11-21 00:18:40.904055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.624 [2024-11-21 00:18:40.904068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:50.624 [2024-11-21 00:18:40.904075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.713 ms 00:29:50.624 [2024-11-21 00:18:40.904082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.904104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.624 [2024-11-21 00:18:40.904116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:50.624 [2024-11-21 00:18:40.904122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:50.624 [2024-11-21 00:18:40.904133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.904149] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:50.624 [2024-11-21 00:18:40.904164] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:50.624 [2024-11-21 00:18:40.904196] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:50.624 [2024-11-21 00:18:40.904210] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:50.624 [2024-11-21 00:18:40.904310] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:50.624 [2024-11-21 00:18:40.904320] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:50.624 [2024-11-21 00:18:40.904333] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:50.624 [2024-11-21 00:18:40.904342] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:50.624 [2024-11-21 00:18:40.904349] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:50.624 [2024-11-21 00:18:40.904361] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:50.624 [2024-11-21 00:18:40.904368] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:50.624 [2024-11-21 00:18:40.904374] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:50.624 [2024-11-21 00:18:40.904382] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:50.624 [2024-11-21 00:18:40.904389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.624 [2024-11-21 00:18:40.904395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:50.624 [2024-11-21 00:18:40.904404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.242 ms 00:29:50.624 [2024-11-21 00:18:40.904410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.904480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.624 [2024-11-21 00:18:40.904487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:50.624 [2024-11-21 00:18:40.904493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:50.624 [2024-11-21 00:18:40.904499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.624 [2024-11-21 00:18:40.904582] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:50.624 [2024-11-21 00:18:40.904591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:50.624 [2024-11-21 00:18:40.904598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:50.624 [2024-11-21 00:18:40.904604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:50.624 [2024-11-21 00:18:40.904610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:50.624 [2024-11-21 00:18:40.904617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:50.624 [2024-11-21 00:18:40.904623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:50.624 [2024-11-21 00:18:40.904628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:50.624 [2024-11-21 00:18:40.904634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:50.624 [2024-11-21 00:18:40.904641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:50.624 [2024-11-21 00:18:40.904646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:50.624 [2024-11-21 00:18:40.904652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:50.624 [2024-11-21 00:18:40.904657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:50.624 [2024-11-21 00:18:40.904663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:50.624 [2024-11-21 00:18:40.904668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:50.625 [2024-11-21 00:18:40.904674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:50.625 [2024-11-21 00:18:40.904684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:50.625 [2024-11-21 00:18:40.904689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:50.625 [2024-11-21 00:18:40.904694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:50.625 [2024-11-21 00:18:40.904700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:50.625 [2024-11-21 00:18:40.904705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:50.625 [2024-11-21 00:18:40.904711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:50.625 [2024-11-21 00:18:40.904717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:50.625 [2024-11-21 00:18:40.904723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:50.625 [2024-11-21 00:18:40.904729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:50.625 [2024-11-21 00:18:40.904735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:50.625 [2024-11-21 00:18:40.904741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:50.625 [2024-11-21 00:18:40.904746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:50.625 [2024-11-21 00:18:40.904753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:50.625 [2024-11-21 00:18:40.904760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:50.625 [2024-11-21 00:18:40.904765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:50.625 [2024-11-21 00:18:40.904771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:50.625 [2024-11-21 00:18:40.904779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:50.625 [2024-11-21 00:18:40.904785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:50.625 [2024-11-21 00:18:40.904791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:50.625 [2024-11-21 00:18:40.904796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:50.625 [2024-11-21 00:18:40.904802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:50.625 [2024-11-21 00:18:40.904808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:50.625 [2024-11-21 00:18:40.904814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:50.625 [2024-11-21 00:18:40.904819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:50.625 [2024-11-21 00:18:40.904825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:50.625 [2024-11-21 00:18:40.904834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:50.625 [2024-11-21 00:18:40.904840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:50.625 [2024-11-21 00:18:40.904845] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:50.625 [2024-11-21 00:18:40.904852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:50.625 [2024-11-21 00:18:40.904860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:50.625 [2024-11-21 00:18:40.904867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:50.625 [2024-11-21 00:18:40.904874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:50.625 [2024-11-21 00:18:40.904882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:50.625 [2024-11-21 00:18:40.904889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:50.625 [2024-11-21 00:18:40.904895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:50.625 [2024-11-21 00:18:40.904901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:50.625 [2024-11-21 00:18:40.904906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:50.625 [2024-11-21 00:18:40.904914] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:50.625 [2024-11-21 00:18:40.904922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:50.625 [2024-11-21 00:18:40.904929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:50.625 [2024-11-21 00:18:40.904935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:50.625 [2024-11-21 00:18:40.904941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:50.625 [2024-11-21 00:18:40.904947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:50.625 [2024-11-21 00:18:40.904954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:50.625 [2024-11-21 00:18:40.904960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:50.625 [2024-11-21 00:18:40.904967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:50.625 [2024-11-21 00:18:40.904973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:50.625 [2024-11-21 00:18:40.904980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:50.625 [2024-11-21 00:18:40.904988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:50.625 [2024-11-21 00:18:40.904994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:50.625 [2024-11-21 00:18:40.905001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:50.625 [2024-11-21 00:18:40.905007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:50.625 [2024-11-21 00:18:40.905013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:50.625 [2024-11-21 00:18:40.905019] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:50.625 [2024-11-21 00:18:40.905027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:50.625 [2024-11-21 00:18:40.905033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:50.625 [2024-11-21 00:18:40.905040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:50.625 [2024-11-21 00:18:40.905055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:50.625 [2024-11-21 00:18:40.905062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:50.625 [2024-11-21 00:18:40.905068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.625 [2024-11-21 00:18:40.905075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:50.625 [2024-11-21 00:18:40.905082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:29:50.625 [2024-11-21 00:18:40.905088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.625 [2024-11-21 00:18:40.913490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.625 [2024-11-21 00:18:40.913610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:50.625 [2024-11-21 00:18:40.913660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.361 ms 00:29:50.625 [2024-11-21 00:18:40.913793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.625 [2024-11-21 00:18:40.913844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.625 [2024-11-21 00:18:40.913868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:50.625 [2024-11-21 00:18:40.913933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:50.625 [2024-11-21 00:18:40.913976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.625 [2024-11-21 00:18:40.930753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.625 [2024-11-21 00:18:40.930865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:50.625 [2024-11-21 00:18:40.930962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.720 ms 00:29:50.625 [2024-11-21 00:18:40.931003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.625 [2024-11-21 00:18:40.931115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.931179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:50.626 [2024-11-21 00:18:40.931261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:50.626 [2024-11-21 00:18:40.931315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.931519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.931630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:50.626 [2024-11-21 00:18:40.931702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:29:50.626 [2024-11-21 00:18:40.931737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.931868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.931946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:50.626 [2024-11-21 00:18:40.932015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:29:50.626 [2024-11-21 00:18:40.932082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.939946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.940084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:50.626 [2024-11-21 00:18:40.940164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.806 ms 00:29:50.626 [2024-11-21 00:18:40.940199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.940393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.940491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:50.626 [2024-11-21 00:18:40.940561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:50.626 [2024-11-21 00:18:40.940641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.945308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.945398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:50.626 [2024-11-21 00:18:40.945441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.615 ms 00:29:50.626 [2024-11-21 00:18:40.945493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.946638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.946723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:50.626 [2024-11-21 00:18:40.946767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:29:50.626 [2024-11-21 00:18:40.946786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.963322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.963435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:50.626 [2024-11-21 00:18:40.963482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.493 ms 00:29:50.626 [2024-11-21 00:18:40.963500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.963629] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:50.626 [2024-11-21 00:18:40.963761] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:50.626 [2024-11-21 00:18:40.963900] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:50.626 [2024-11-21 00:18:40.964021] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:50.626 [2024-11-21 00:18:40.964047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.964063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:50.626 [2024-11-21 
00:18:40.964080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.497 ms 00:29:50.626 [2024-11-21 00:18:40.964094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.964135] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:50.626 [2024-11-21 00:18:40.964205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.964222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:50.626 [2024-11-21 00:18:40.964243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:29:50.626 [2024-11-21 00:18:40.964258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.967456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.967544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:50.626 [2024-11-21 00:18:40.967585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.162 ms 00:29:50.626 [2024-11-21 00:18:40.967594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.968175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.968198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:50.626 [2024-11-21 00:18:40.968207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:50.626 [2024-11-21 00:18:40.968213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.626 [2024-11-21 00:18:40.968268] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:50.626 [2024-11-21 00:18:40.968454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.626 [2024-11-21 00:18:40.968464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:50.626 [2024-11-21 00:18:40.968473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:29:50.626 [2024-11-21 00:18:40.968484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.192 [2024-11-21 00:18:41.524922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.192 [2024-11-21 00:18:41.524972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:51.192 [2024-11-21 00:18:41.524984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 556.189 ms 00:29:51.192 [2024-11-21 00:18:41.524992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.192 [2024-11-21 00:18:41.526855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.192 [2024-11-21 00:18:41.526955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:51.192 [2024-11-21 00:18:41.527006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.377 ms 00:29:51.192 [2024-11-21 00:18:41.527024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.192 [2024-11-21 00:18:41.527574] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:51.192 [2024-11-21 00:18:41.527630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.192 [2024-11-21 00:18:41.527689] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:51.192 [2024-11-21 00:18:41.527709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.568 ms 00:29:51.192 [2024-11-21 00:18:41.527761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.192 [2024-11-21 00:18:41.527798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.192 [2024-11-21 00:18:41.527817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:51.192 [2024-11-21 00:18:41.527857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:51.192 [2024-11-21 00:18:41.527865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.192 [2024-11-21 00:18:41.527899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 559.630 ms, result 0 00:29:51.192 [2024-11-21 00:18:41.527943] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:51.192 [2024-11-21 00:18:41.528059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.192 [2024-11-21 00:18:41.528070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:51.192 [2024-11-21 00:18:41.528077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.117 ms 00:29:51.192 [2024-11-21 00:18:41.528082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.758 [2024-11-21 00:18:42.063719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.758 [2024-11-21 00:18:42.063755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:51.758 [2024-11-21 00:18:42.063764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 535.268 ms 00:29:51.758 [2024-11-21 00:18:42.063771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.758 [2024-11-21 00:18:42.064998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.758 [2024-11-21 00:18:42.065028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:51.758 [2024-11-21 00:18:42.065037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.911 ms 00:29:51.758 [2024-11-21 00:18:42.065043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.758 [2024-11-21 00:18:42.065394] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:51.758 [2024-11-21 00:18:42.065417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.758 [2024-11-21 00:18:42.065424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:51.758 [2024-11-21 00:18:42.065432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.352 ms 00:29:51.758 [2024-11-21 00:18:42.065438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.758 [2024-11-21 00:18:42.065461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.758 [2024-11-21 00:18:42.065468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:51.758 [2024-11-21 00:18:42.065474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:51.758 [2024-11-21 00:18:42.065479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.758 [2024-11-21 
00:18:42.065507] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 537.567 ms, result 0 00:29:51.758 [2024-11-21 00:18:42.065542] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:51.758 [2024-11-21 00:18:42.065550] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:51.758 [2024-11-21 00:18:42.065557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.759 [2024-11-21 00:18:42.065564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:51.759 [2024-11-21 00:18:42.065570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1097.303 ms 00:29:51.759 [2024-11-21 00:18:42.065577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.759 [2024-11-21 00:18:42.065601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.759 [2024-11-21 00:18:42.065608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:51.759 [2024-11-21 00:18:42.065617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:51.759 [2024-11-21 00:18:42.065623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.759 [2024-11-21 00:18:42.072257] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:51.759 [2024-11-21 00:18:42.072461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.759 [2024-11-21 00:18:42.072475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:51.759 [2024-11-21 00:18:42.072483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.826 ms 00:29:51.759 [2024-11-21 00:18:42.072489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.759 [2024-11-21 00:18:42.073027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.759 [2024-11-21 00:18:42.073046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:51.759 [2024-11-21 00:18:42.073054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.483 ms 00:29:51.759 [2024-11-21 00:18:42.073061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.759 [2024-11-21 00:18:42.074733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.759 [2024-11-21 00:18:42.074751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:51.759 [2024-11-21 00:18:42.074759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.658 ms 00:29:51.759 [2024-11-21 00:18:42.074765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.759 [2024-11-21 00:18:42.074797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.759 [2024-11-21 00:18:42.074804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:51.759 [2024-11-21 00:18:42.074810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:51.759 [2024-11-21 00:18:42.074816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.759 [2024-11-21 00:18:42.074897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.759 [2024-11-21 00:18:42.074911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:51.759 
[2024-11-21 00:18:42.074918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:51.759 [2024-11-21 00:18:42.074924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.759 [2024-11-21 00:18:42.074944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.759 [2024-11-21 00:18:42.074950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:51.759 [2024-11-21 00:18:42.074957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:51.759 [2024-11-21 00:18:42.074968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.759 [2024-11-21 00:18:42.074993] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:51.759 [2024-11-21 00:18:42.075001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.759 [2024-11-21 00:18:42.075010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:51.759 [2024-11-21 00:18:42.075016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:51.759 [2024-11-21 00:18:42.075022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.759 [2024-11-21 00:18:42.075066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.759 [2024-11-21 00:18:42.075110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:51.759 [2024-11-21 00:18:42.075116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:51.759 [2024-11-21 00:18:42.075122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.759 [2024-11-21 00:18:42.076017] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1177.972 ms, result 0 00:29:51.759 [2024-11-21 00:18:42.088852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.759 [2024-11-21 00:18:42.104853] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:51.759 [2024-11-21 00:18:42.112959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:52.695 Validate MD5 checksum, iteration 1 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:52.695 00:18:42 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:52.695 00:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:52.695 [2024-11-21 00:18:42.829457] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:52.695 [2024-11-21 00:18:42.829563] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93642 ] 00:29:52.695 [2024-11-21 00:18:42.965183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.695 [2024-11-21 00:18:42.997576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.077  [2024-11-21T00:18:45.442Z] Copying: 528/1024 [MB] (528 MBps) [2024-11-21T00:18:47.986Z] Copying: 1024/1024 [MB] (average 541 MBps) 00:29:57.565 00:29:57.565 00:18:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:57.565 00:18:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:59.474 Validate MD5 checksum, iteration 2 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=488277fe57f6d8b86f20376f504256ba 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 488277fe57f6d8b86f20376f504256ba != \4\8\8\2\7\7\f\e\5\7\f\6\d\8\b\8\6\f\2\0\3\7\6\f\5\0\4\2\5\6\b\a ]] 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:59.474 00:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:59.732 [2024-11-21 00:18:49.934397] Starting SPDK v24.09.1-pre git 
sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:59.732 [2024-11-21 00:18:49.934502] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93714 ] 00:29:59.732 [2024-11-21 00:18:50.071950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.732 [2024-11-21 00:18:50.105534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.116  [2024-11-21T00:18:52.479Z] Copying: 576/1024 [MB] (576 MBps) [2024-11-21T00:18:55.113Z] Copying: 1024/1024 [MB] (average 542 MBps) 00:30:04.692 00:30:04.692 00:18:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:04.692 00:18:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:06.592 00:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:06.592 00:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d26f55796c02748cbfd4e1eff565f419 00:30:06.592 00:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d26f55796c02748cbfd4e1eff565f419 != \d\2\6\f\5\5\7\9\6\c\0\2\7\4\8\c\b\f\d\4\e\1\e\f\f\5\6\5\f\4\1\9 ]] 00:30:06.592 00:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:06.592 00:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:06.592 00:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:06.592 00:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:06.592 00:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:06.592 00:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93607 ]] 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93607 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93607 ']' 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93607 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93607 00:30:06.852 killing process with pid 93607 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93607' 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 93607 00:30:06.852 00:18:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93607 00:30:06.852 [2024-11-21 00:18:57.197817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:06.852 [2024-11-21 00:18:57.201715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.852 [2024-11-21 00:18:57.201751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:06.852 [2024-11-21 00:18:57.201763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:06.852 [2024-11-21 00:18:57.201770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.852 [2024-11-21 00:18:57.201788] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:06.852 [2024-11-21 00:18:57.202319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.852 [2024-11-21 00:18:57.202337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:06.852 [2024-11-21 00:18:57.202346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:30:06.852 [2024-11-21 00:18:57.202357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.852 [2024-11-21 00:18:57.202554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.852 [2024-11-21 00:18:57.202562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:06.852 [2024-11-21 00:18:57.202573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:30:06.853 [2024-11-21 00:18:57.202579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.204201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.204227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:06.853 [2024-11-21 00:18:57.204235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.608 ms 00:30:06.853 [2024-11-21 00:18:57.204241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.205125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.205149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:06.853 [2024-11-21 00:18:57.205157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.850 ms 00:30:06.853 [2024-11-21 00:18:57.205164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.207473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.207502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:06.853 [2024-11-21 00:18:57.207510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.280 ms 00:30:06.853 [2024-11-21 00:18:57.207516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.208841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.208872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:06.853 [2024-11-21 00:18:57.208880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.295 ms 00:30:06.853 [2024-11-21 00:18:57.208886] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.208949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.208957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:06.853 [2024-11-21 00:18:57.208964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:06.853 [2024-11-21 00:18:57.208970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.210944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.211102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:06.853 [2024-11-21 00:18:57.211115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.956 ms 00:30:06.853 [2024-11-21 00:18:57.211122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.212672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.212698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:06.853 [2024-11-21 00:18:57.212706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.524 ms 00:30:06.853 [2024-11-21 00:18:57.212712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.214484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.214508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:06.853 [2024-11-21 00:18:57.214515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.739 ms 00:30:06.853 [2024-11-21 00:18:57.214521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.216238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.216272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:06.853 [2024-11-21 00:18:57.216280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.667 ms 00:30:06.853 [2024-11-21 00:18:57.216285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.216319] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:06.853 [2024-11-21 00:18:57.216332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:06.853 [2024-11-21 00:18:57.216345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:06.853 [2024-11-21 00:18:57.216352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:06.853 [2024-11-21 00:18:57.216359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 
[2024-11-21 00:18:57.216391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:06.853 [2024-11-21 00:18:57.216456] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:06.853 [2024-11-21 00:18:57.216462] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 69dda36b-7eda-4c64-a11f-835234a8bb0d 00:30:06.853 [2024-11-21 00:18:57.216468] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:06.853 [2024-11-21 00:18:57.216474] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:06.853 [2024-11-21 00:18:57.216480] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:06.853 [2024-11-21 00:18:57.216487] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:06.853 [2024-11-21 00:18:57.216493] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:06.853 [2024-11-21 00:18:57.216499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:06.853 [2024-11-21 00:18:57.216504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:06.853 [2024-11-21 00:18:57.216510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:06.853 [2024-11-21 00:18:57.216516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:06.853 [2024-11-21 00:18:57.216522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.216528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:06.853 [2024-11-21 00:18:57.216534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:30:06.853 [2024-11-21 00:18:57.216546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.218223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.218248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:06.853 [2024-11-21 00:18:57.218256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.663 ms 00:30:06.853 [2024-11-21 00:18:57.218262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
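The ftl_debug.c dump just above records per-band occupancy at shutdown: bands 1 and 2 are fully written (261120/261120 blocks), band 3 holds 2048 blocks, and bands 4 through 18 are free, which is consistent with the stats block reporting 320 total writes against 0 user writes (WAF prints as inf when there are no user writes to divide by). As a minimal bash sketch for tallying written bands from such a dump (the log file name ftl.log is an assumption; this run does not save one):

  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' ftl.log \
    | awk '{ total++; if ($7 > 0) written++ }
           END { printf "%d of %d bands carry writes\n", written, total }'

Run against the dump above, this would print "3 of 18 bands carry writes".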
00:30:06.853 [2024-11-21 00:18:57.218367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.853 [2024-11-21 00:18:57.218375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:06.853 [2024-11-21 00:18:57.218386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.089 ms 00:30:06.853 [2024-11-21 00:18:57.218392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.224357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.853 [2024-11-21 00:18:57.224474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:06.853 [2024-11-21 00:18:57.224486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.853 [2024-11-21 00:18:57.224492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.224517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.853 [2024-11-21 00:18:57.224525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:06.853 [2024-11-21 00:18:57.224536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.853 [2024-11-21 00:18:57.224542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.224587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.853 [2024-11-21 00:18:57.224596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:06.853 [2024-11-21 00:18:57.224602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.853 [2024-11-21 00:18:57.224608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.224624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.853 [2024-11-21 00:18:57.224631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:06.853 [2024-11-21 00:18:57.224637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.853 [2024-11-21 00:18:57.224646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.235338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.853 [2024-11-21 00:18:57.235462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:06.853 [2024-11-21 00:18:57.235474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.853 [2024-11-21 00:18:57.235480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.243916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.853 [2024-11-21 00:18:57.243946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:06.853 [2024-11-21 00:18:57.243958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.853 [2024-11-21 00:18:57.243966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.244027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.853 [2024-11-21 00:18:57.244035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:06.853 [2024-11-21 00:18:57.244042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.853 [2024-11-21 00:18:57.244049] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.853 [2024-11-21 00:18:57.244078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.853 [2024-11-21 00:18:57.244086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:06.854 [2024-11-21 00:18:57.244097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.854 [2024-11-21 00:18:57.244104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.854 [2024-11-21 00:18:57.244168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.854 [2024-11-21 00:18:57.244176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:06.854 [2024-11-21 00:18:57.244183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.854 [2024-11-21 00:18:57.244189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.854 [2024-11-21 00:18:57.244216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.854 [2024-11-21 00:18:57.244224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:06.854 [2024-11-21 00:18:57.244231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.854 [2024-11-21 00:18:57.244237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.854 [2024-11-21 00:18:57.244286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.854 [2024-11-21 00:18:57.244309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:06.854 [2024-11-21 00:18:57.244316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.854 [2024-11-21 00:18:57.244323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.854 [2024-11-21 00:18:57.244363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:06.854 [2024-11-21 00:18:57.244372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:06.854 [2024-11-21 00:18:57.244380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:06.854 [2024-11-21 00:18:57.244386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.854 [2024-11-21 00:18:57.244501] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 42.759 ms, result 0 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:07.114 Remove shared memory files 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:07.114 00:18:57 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid93428 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:07.114 ************************************ 00:30:07.114 END TEST ftl_upgrade_shutdown 00:30:07.114 ************************************ 00:30:07.114 00:30:07.114 real 1m18.037s 00:30:07.114 user 1m41.134s 00:30:07.114 sys 0m20.883s 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:07.114 00:18:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:07.114 00:18:57 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:07.114 00:18:57 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:07.114 00:18:57 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:30:07.114 00:18:57 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:07.114 00:18:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:07.114 ************************************ 00:30:07.114 START TEST ftl_restore_fast 00:30:07.114 ************************************ 00:30:07.114 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:07.376 * Looking for test storage... 00:30:07.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:07.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.376 --rc genhtml_branch_coverage=1 00:30:07.376 --rc genhtml_function_coverage=1 00:30:07.376 --rc genhtml_legend=1 00:30:07.376 --rc geninfo_all_blocks=1 00:30:07.376 --rc geninfo_unexecuted_blocks=1 00:30:07.376 00:30:07.376 ' 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:07.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.376 --rc genhtml_branch_coverage=1 00:30:07.376 --rc genhtml_function_coverage=1 00:30:07.376 --rc genhtml_legend=1 00:30:07.376 --rc geninfo_all_blocks=1 00:30:07.376 --rc geninfo_unexecuted_blocks=1 00:30:07.376 00:30:07.376 ' 00:30:07.376 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:07.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.377 --rc genhtml_branch_coverage=1 00:30:07.377 --rc genhtml_function_coverage=1 00:30:07.377 --rc genhtml_legend=1 00:30:07.377 --rc geninfo_all_blocks=1 00:30:07.377 --rc geninfo_unexecuted_blocks=1 00:30:07.377 00:30:07.377 ' 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:07.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.377 --rc genhtml_branch_coverage=1 00:30:07.377 --rc genhtml_function_coverage=1 00:30:07.377 --rc genhtml_legend=1 00:30:07.377 --rc geninfo_all_blocks=1 00:30:07.377 --rc geninfo_unexecuted_blocks=1 00:30:07.377 00:30:07.377 ' 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
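The scripts/common.sh walk a few lines up is the shell version check deciding whether the installed lcov (1.15 here) predates 2.x, which gates the branch-coverage flags exported right after it. A condensed, runnable reconstruction of that comparison; it mirrors the ver1/ver2/op names from the trace but is a sketch, not the verbatim SPDK helper:

  cmp_versions() {
      local IFS=.-:    # split version strings on dots, dashes, and colons, as the trace shows
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]
  }

  cmp_versions 1.15 '<' 2 && echo 'lcov is older than 2.x'   # true for this run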
00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:07.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
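Before any options are parsed, ftl/common.sh resolves the test and repo directories from the script's own path, which is what the dirname/readlink steps traced above compute. The core of that prologue, as a sketch:

  testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl for restore.sh
  rootdir=$(readlink -f "$testdir/../..")    # repo root, two levels up
  rpc_py=$rootdir/scripts/rpc.py             # RPC client used by the bdev/ftl calls that follow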
00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.pEmqBAn3ps 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=93879 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 93879 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 93879 ']' 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:07.377 00:18:57 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:07.377 [2024-11-21 00:18:57.767212] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
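The restore.sh trace above shows the option parsing for this invocation, restore.sh -f -c 0000:00:10.0 0000:00:11.0: -f turns on the fast-shutdown variant, -c names the NV-cache PCI address, and the base device is left as the remaining positional argument. A sketch of that getopts loop; names follow the trace except uuid, which is an assumed placeholder since -u is never passed in this run:

  fast_shutdown=0
  nv_cache=
  while getopts ':u:c:f' opt; do
      case $opt in
          u) uuid=$OPTARG ;;      # not exercised here; the variable name is an assumption
          c) nv_cache=$OPTARG ;;  # -c 0000:00:10.0 selects the NV-cache device
          f) fast_shutdown=1 ;;   # -f requests the fast shutdown path
      esac
  done
  shift $((OPTIND - 1))           # xtrace prints this with OPTIND expanded, hence 'shift 3'
  device=$1                       # base bdev BDF, 0000:00:11.0 here
  timeout=240

From here the script traps cleanup on exit, launches spdk_tgt, and records its pid (svcpid=93879) before waiting on the RPC socket.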
00:30:07.377 [2024-11-21 00:18:57.767353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93879 ] 00:30:07.636 [2024-11-21 00:18:57.900315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.636 [2024-11-21 00:18:57.953134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.203 00:18:58 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:08.203 00:18:58 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:30:08.203 00:18:58 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:08.203 00:18:58 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:08.203 00:18:58 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:08.203 00:18:58 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:08.203 00:18:58 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:08.203 00:18:58 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:08.461 00:18:58 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:08.461 00:18:58 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:08.461 00:18:58 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:08.461 00:18:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:30:08.461 00:18:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:08.461 00:18:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:30:08.461 00:18:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:30:08.461 00:18:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:08.720 { 00:30:08.720 "name": "nvme0n1", 00:30:08.720 "aliases": [ 00:30:08.720 "64609056-8568-4269-887e-4982d75bb8fa" 00:30:08.720 ], 00:30:08.720 "product_name": "NVMe disk", 00:30:08.720 "block_size": 4096, 00:30:08.720 "num_blocks": 1310720, 00:30:08.720 "uuid": "64609056-8568-4269-887e-4982d75bb8fa", 00:30:08.720 "numa_id": -1, 00:30:08.720 "assigned_rate_limits": { 00:30:08.720 "rw_ios_per_sec": 0, 00:30:08.720 "rw_mbytes_per_sec": 0, 00:30:08.720 "r_mbytes_per_sec": 0, 00:30:08.720 "w_mbytes_per_sec": 0 00:30:08.720 }, 00:30:08.720 "claimed": true, 00:30:08.720 "claim_type": "read_many_write_one", 00:30:08.720 "zoned": false, 00:30:08.720 "supported_io_types": { 00:30:08.720 "read": true, 00:30:08.720 "write": true, 00:30:08.720 "unmap": true, 00:30:08.720 "flush": true, 00:30:08.720 "reset": true, 00:30:08.720 "nvme_admin": true, 00:30:08.720 "nvme_io": true, 00:30:08.720 "nvme_io_md": false, 00:30:08.720 "write_zeroes": true, 00:30:08.720 "zcopy": false, 00:30:08.720 "get_zone_info": false, 00:30:08.720 "zone_management": false, 00:30:08.720 "zone_append": false, 00:30:08.720 "compare": true, 00:30:08.720 "compare_and_write": false, 00:30:08.720 "abort": true, 00:30:08.720 "seek_hole": false, 00:30:08.720 "seek_data": false, 00:30:08.720 "copy": true, 00:30:08.720 "nvme_iov_md": 
false 00:30:08.720 }, 00:30:08.720 "driver_specific": { 00:30:08.720 "nvme": [ 00:30:08.720 { 00:30:08.720 "pci_address": "0000:00:11.0", 00:30:08.720 "trid": { 00:30:08.720 "trtype": "PCIe", 00:30:08.720 "traddr": "0000:00:11.0" 00:30:08.720 }, 00:30:08.720 "ctrlr_data": { 00:30:08.720 "cntlid": 0, 00:30:08.720 "vendor_id": "0x1b36", 00:30:08.720 "model_number": "QEMU NVMe Ctrl", 00:30:08.720 "serial_number": "12341", 00:30:08.720 "firmware_revision": "8.0.0", 00:30:08.720 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:08.720 "oacs": { 00:30:08.720 "security": 0, 00:30:08.720 "format": 1, 00:30:08.720 "firmware": 0, 00:30:08.720 "ns_manage": 1 00:30:08.720 }, 00:30:08.720 "multi_ctrlr": false, 00:30:08.720 "ana_reporting": false 00:30:08.720 }, 00:30:08.720 "vs": { 00:30:08.720 "nvme_version": "1.4" 00:30:08.720 }, 00:30:08.720 "ns_data": { 00:30:08.720 "id": 1, 00:30:08.720 "can_share": false 00:30:08.720 } 00:30:08.720 } 00:30:08.720 ], 00:30:08.720 "mp_policy": "active_passive" 00:30:08.720 } 00:30:08.720 } 00:30:08.720 ]' 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:08.720 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:08.977 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=f6e33cdf-18ad-44d4-94b5-72a061d09abd 00:30:08.977 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:08.977 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f6e33cdf-18ad-44d4-94b5-72a061d09abd 00:30:09.234 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:09.493 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=bd2048df-753e-42e1-8c7d-6b655eeb3c29 00:30:09.493 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bd2048df-753e-42e1-8c7d-6b655eeb3c29 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local 
base_bdev=51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:30:09.751 00:18:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:09.751 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:09.751 { 00:30:09.751 "name": "51ee95b3-0c30-49ac-a47d-7aca594411d4", 00:30:09.751 "aliases": [ 00:30:09.751 "lvs/nvme0n1p0" 00:30:09.751 ], 00:30:09.751 "product_name": "Logical Volume", 00:30:09.751 "block_size": 4096, 00:30:09.751 "num_blocks": 26476544, 00:30:09.751 "uuid": "51ee95b3-0c30-49ac-a47d-7aca594411d4", 00:30:09.751 "assigned_rate_limits": { 00:30:09.751 "rw_ios_per_sec": 0, 00:30:09.751 "rw_mbytes_per_sec": 0, 00:30:09.751 "r_mbytes_per_sec": 0, 00:30:09.751 "w_mbytes_per_sec": 0 00:30:09.752 }, 00:30:09.752 "claimed": false, 00:30:09.752 "zoned": false, 00:30:09.752 "supported_io_types": { 00:30:09.752 "read": true, 00:30:09.752 "write": true, 00:30:09.752 "unmap": true, 00:30:09.752 "flush": false, 00:30:09.752 "reset": true, 00:30:09.752 "nvme_admin": false, 00:30:09.752 "nvme_io": false, 00:30:09.752 "nvme_io_md": false, 00:30:09.752 "write_zeroes": true, 00:30:09.752 "zcopy": false, 00:30:09.752 "get_zone_info": false, 00:30:09.752 "zone_management": false, 00:30:09.752 "zone_append": false, 00:30:09.752 "compare": false, 00:30:09.752 "compare_and_write": false, 00:30:09.752 "abort": false, 00:30:09.752 "seek_hole": true, 00:30:09.752 "seek_data": true, 00:30:09.752 "copy": false, 00:30:09.752 "nvme_iov_md": false 00:30:09.752 }, 00:30:09.752 "driver_specific": { 00:30:09.752 "lvol": { 00:30:09.752 "lvol_store_uuid": "bd2048df-753e-42e1-8c7d-6b655eeb3c29", 00:30:09.752 "base_bdev": "nvme0n1", 00:30:09.752 "thin_provision": true, 00:30:09.752 "num_allocated_clusters": 0, 00:30:09.752 "snapshot": false, 00:30:09.752 "clone": false, 00:30:09.752 "esnap_clone": false 00:30:09.752 } 00:30:09.752 } 00:30:09.752 } 00:30:09.752 ]' 00:30:09.752 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:10.010 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:30:10.010 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:10.010 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:10.010 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:10.010 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:30:10.010 00:19:00 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:10.010 00:19:00 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:10.010 00:19:00 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
00:30:10.268 00:19:00 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:10.268 00:19:00 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:10.268 00:19:00 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:10.268 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:10.268 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:10.268 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:30:10.268 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:30:10.268 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:10.268 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:10.268 { 00:30:10.268 "name": "51ee95b3-0c30-49ac-a47d-7aca594411d4", 00:30:10.268 "aliases": [ 00:30:10.268 "lvs/nvme0n1p0" 00:30:10.268 ], 00:30:10.268 "product_name": "Logical Volume", 00:30:10.268 "block_size": 4096, 00:30:10.268 "num_blocks": 26476544, 00:30:10.269 "uuid": "51ee95b3-0c30-49ac-a47d-7aca594411d4", 00:30:10.269 "assigned_rate_limits": { 00:30:10.269 "rw_ios_per_sec": 0, 00:30:10.269 "rw_mbytes_per_sec": 0, 00:30:10.269 "r_mbytes_per_sec": 0, 00:30:10.269 "w_mbytes_per_sec": 0 00:30:10.269 }, 00:30:10.269 "claimed": false, 00:30:10.269 "zoned": false, 00:30:10.269 "supported_io_types": { 00:30:10.269 "read": true, 00:30:10.269 "write": true, 00:30:10.269 "unmap": true, 00:30:10.269 "flush": false, 00:30:10.269 "reset": true, 00:30:10.269 "nvme_admin": false, 00:30:10.269 "nvme_io": false, 00:30:10.269 "nvme_io_md": false, 00:30:10.269 "write_zeroes": true, 00:30:10.269 "zcopy": false, 00:30:10.269 "get_zone_info": false, 00:30:10.269 "zone_management": false, 00:30:10.269 "zone_append": false, 00:30:10.269 "compare": false, 00:30:10.269 "compare_and_write": false, 00:30:10.269 "abort": false, 00:30:10.269 "seek_hole": true, 00:30:10.269 "seek_data": true, 00:30:10.269 "copy": false, 00:30:10.269 "nvme_iov_md": false 00:30:10.269 }, 00:30:10.269 "driver_specific": { 00:30:10.269 "lvol": { 00:30:10.269 "lvol_store_uuid": "bd2048df-753e-42e1-8c7d-6b655eeb3c29", 00:30:10.269 "base_bdev": "nvme0n1", 00:30:10.269 "thin_provision": true, 00:30:10.269 "num_allocated_clusters": 0, 00:30:10.269 "snapshot": false, 00:30:10.269 "clone": false, 00:30:10.269 "esnap_clone": false 00:30:10.269 } 00:30:10.269 } 00:30:10.269 } 00:30:10.269 ]' 00:30:10.269 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 
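Everything from the first controller attach up to nvc_bdev=nvc0n1p0 above is the harness building the two bdevs FTL needs: a 103424 MiB (~101 GiB) thin-provisioned logical volume carved out of the 5 GiB namespace at 0000:00:11.0 (thin provisioning is what lets the claimed size exceed the backing device, hence the -t flag and "thin_provision": true in the bdev info), and a 5171 MiB split of the namespace at 0000:00:10.0 to serve as the non-volatile write-buffer cache; 5171 is consistent with common.sh taking 1/20th of the volume size (103424 / 20, integer division). The same RPC sequence, condensed, with IDs and sizes taken from this run:

    # Base volume and NV cache setup, as the harness just ran it via rpc.py
    $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # -> nvme0n1, 1310720 x 4096 B = 5120 MiB
    # wipe any lvstore a previous run left behind
    for lvs in $($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc_py bdev_lvol_delete_lvstore -u "$lvs"
    done
    $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u bd2048df-753e-42e1-8c7d-6b655eeb3c29  # thin lvol, 103424 MiB
    $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache controller -> nvc0n1
    $rpc_py bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split -> nvc0n1p0

The lvol is carried forward by its UUID (51ee95b3-0c30-49ac-a47d-7aca594411d4) rather than its name; that UUID becomes the -d argument to the bdev_ftl_create call assembled next, with nvc0n1p0 as the -c cache device.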
00:30:10.528 00:19:00 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:30:10.528 00:19:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51ee95b3-0c30-49ac-a47d-7aca594411d4 00:30:10.787 00:19:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:10.787 { 00:30:10.787 "name": "51ee95b3-0c30-49ac-a47d-7aca594411d4", 00:30:10.787 "aliases": [ 00:30:10.787 "lvs/nvme0n1p0" 00:30:10.787 ], 00:30:10.787 "product_name": "Logical Volume", 00:30:10.787 "block_size": 4096, 00:30:10.787 "num_blocks": 26476544, 00:30:10.787 "uuid": "51ee95b3-0c30-49ac-a47d-7aca594411d4", 00:30:10.787 "assigned_rate_limits": { 00:30:10.787 "rw_ios_per_sec": 0, 00:30:10.787 "rw_mbytes_per_sec": 0, 00:30:10.787 "r_mbytes_per_sec": 0, 00:30:10.787 "w_mbytes_per_sec": 0 00:30:10.787 }, 00:30:10.787 "claimed": false, 00:30:10.787 "zoned": false, 00:30:10.787 "supported_io_types": { 00:30:10.787 "read": true, 00:30:10.787 "write": true, 00:30:10.787 "unmap": true, 00:30:10.787 "flush": false, 00:30:10.787 "reset": true, 00:30:10.787 "nvme_admin": false, 00:30:10.787 "nvme_io": false, 00:30:10.787 "nvme_io_md": false, 00:30:10.787 "write_zeroes": true, 00:30:10.787 "zcopy": false, 00:30:10.787 "get_zone_info": false, 00:30:10.787 "zone_management": false, 00:30:10.787 "zone_append": false, 00:30:10.787 "compare": false, 00:30:10.787 "compare_and_write": false, 00:30:10.787 "abort": false, 00:30:10.787 "seek_hole": true, 00:30:10.787 "seek_data": true, 00:30:10.787 "copy": false, 00:30:10.787 "nvme_iov_md": false 00:30:10.787 }, 00:30:10.787 "driver_specific": { 00:30:10.787 "lvol": { 00:30:10.787 "lvol_store_uuid": "bd2048df-753e-42e1-8c7d-6b655eeb3c29", 00:30:10.787 "base_bdev": "nvme0n1", 00:30:10.787 "thin_provision": true, 00:30:10.787 "num_allocated_clusters": 0, 00:30:10.787 "snapshot": false, 00:30:10.787 "clone": false, 00:30:10.787 "esnap_clone": false 00:30:10.787 } 00:30:10.787 } 00:30:10.787 } 00:30:10.787 ]' 00:30:10.787 00:19:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:10.787 00:19:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:30:10.787 00:19:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:11.047 00:19:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:11.047 00:19:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:11.047 00:19:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:30:11.047 00:19:01 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:11.047 00:19:01 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 51ee95b3-0c30-49ac-a47d-7aca594411d4 --l2p_dram_limit 10' 00:30:11.047 00:19:01 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:11.047 00:19:01 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:11.047 00:19:01 ftl.ftl_restore_fast 
-- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:11.047 00:19:01 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:11.047 00:19:01 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:11.047 00:19:01 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 51ee95b3-0c30-49ac-a47d-7aca594411d4 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:11.047 [2024-11-21 00:19:01.390568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.047 [2024-11-21 00:19:01.390730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:11.047 [2024-11-21 00:19:01.390746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:11.047 [2024-11-21 00:19:01.390758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.047 [2024-11-21 00:19:01.390815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.047 [2024-11-21 00:19:01.390827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:11.047 [2024-11-21 00:19:01.390835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:11.047 [2024-11-21 00:19:01.390845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.047 [2024-11-21 00:19:01.390867] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:11.047 [2024-11-21 00:19:01.391076] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:11.047 [2024-11-21 00:19:01.391088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.047 [2024-11-21 00:19:01.391097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:11.047 [2024-11-21 00:19:01.391107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:30:11.047 [2024-11-21 00:19:01.391115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.047 [2024-11-21 00:19:01.391141] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2ebebdb1-f50a-4a1e-8f04-426f1bc9f828 00:30:11.047 [2024-11-21 00:19:01.392476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.047 [2024-11-21 00:19:01.392503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:11.047 [2024-11-21 00:19:01.392513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:11.048 [2024-11-21 00:19:01.392520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.048 [2024-11-21 00:19:01.399402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.048 [2024-11-21 00:19:01.399426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:11.048 [2024-11-21 00:19:01.399440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.833 ms 00:30:11.048 [2024-11-21 00:19:01.399446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.048 [2024-11-21 00:19:01.399543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.048 [2024-11-21 00:19:01.399551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:11.048 [2024-11-21 00:19:01.399560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 
00:30:11.048 [2024-11-21 00:19:01.399567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.048 [2024-11-21 00:19:01.399611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.048 [2024-11-21 00:19:01.399619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:11.048 [2024-11-21 00:19:01.399627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:11.048 [2024-11-21 00:19:01.399632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.048 [2024-11-21 00:19:01.399652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:11.048 [2024-11-21 00:19:01.401331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.048 [2024-11-21 00:19:01.401444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:11.048 [2024-11-21 00:19:01.401459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:30:11.048 [2024-11-21 00:19:01.401469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.048 [2024-11-21 00:19:01.401497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.048 [2024-11-21 00:19:01.401505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:11.048 [2024-11-21 00:19:01.401512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:11.048 [2024-11-21 00:19:01.401522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.048 [2024-11-21 00:19:01.401538] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:11.048 [2024-11-21 00:19:01.401650] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:11.048 [2024-11-21 00:19:01.401661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:11.048 [2024-11-21 00:19:01.401677] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:11.048 [2024-11-21 00:19:01.401685] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:11.048 [2024-11-21 00:19:01.401694] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:11.048 [2024-11-21 00:19:01.401701] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:11.048 [2024-11-21 00:19:01.401712] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:11.048 [2024-11-21 00:19:01.401717] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:11.048 [2024-11-21 00:19:01.401725] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:11.048 [2024-11-21 00:19:01.401733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.048 [2024-11-21 00:19:01.401741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:11.048 [2024-11-21 00:19:01.401747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:30:11.048 [2024-11-21 00:19:01.401755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.048 [2024-11-21 00:19:01.401818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.048 [2024-11-21 
00:19:01.401828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:11.048 [2024-11-21 00:19:01.401834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:11.048 [2024-11-21 00:19:01.401841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.048 [2024-11-21 00:19:01.401915] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:11.048 [2024-11-21 00:19:01.401925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:11.048 [2024-11-21 00:19:01.401932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.048 [2024-11-21 00:19:01.401940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.048 [2024-11-21 00:19:01.401946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:11.048 [2024-11-21 00:19:01.401953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:11.048 [2024-11-21 00:19:01.401959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:11.048 [2024-11-21 00:19:01.401967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:11.048 [2024-11-21 00:19:01.401973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:11.048 [2024-11-21 00:19:01.401979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.048 [2024-11-21 00:19:01.401984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:11.048 [2024-11-21 00:19:01.401991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:11.048 [2024-11-21 00:19:01.401996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.048 [2024-11-21 00:19:01.402006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:11.048 [2024-11-21 00:19:01.402012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:11.048 [2024-11-21 00:19:01.402018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.048 [2024-11-21 00:19:01.402023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:11.048 [2024-11-21 00:19:01.402032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:11.048 [2024-11-21 00:19:01.402038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.048 [2024-11-21 00:19:01.402047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:11.048 [2024-11-21 00:19:01.402053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:11.048 [2024-11-21 00:19:01.402061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.048 [2024-11-21 00:19:01.402066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:11.048 [2024-11-21 00:19:01.402074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:11.048 [2024-11-21 00:19:01.402081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.048 [2024-11-21 00:19:01.402088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:11.048 [2024-11-21 00:19:01.402094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:11.048 [2024-11-21 00:19:01.402101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.048 [2024-11-21 00:19:01.402107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 
00:30:11.048 [2024-11-21 00:19:01.402117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:11.048 [2024-11-21 00:19:01.402123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.048 [2024-11-21 00:19:01.402130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:11.048 [2024-11-21 00:19:01.402136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:11.048 [2024-11-21 00:19:01.402144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.048 [2024-11-21 00:19:01.402150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:11.048 [2024-11-21 00:19:01.402157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:11.048 [2024-11-21 00:19:01.402164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.048 [2024-11-21 00:19:01.402171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:11.048 [2024-11-21 00:19:01.402177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:11.048 [2024-11-21 00:19:01.402184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.048 [2024-11-21 00:19:01.402190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:11.048 [2024-11-21 00:19:01.402197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:11.048 [2024-11-21 00:19:01.402203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.048 [2024-11-21 00:19:01.402210] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:11.048 [2024-11-21 00:19:01.402221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:11.048 [2024-11-21 00:19:01.402231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.048 [2024-11-21 00:19:01.402238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.048 [2024-11-21 00:19:01.402247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:11.048 [2024-11-21 00:19:01.402253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:11.048 [2024-11-21 00:19:01.402262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:11.048 [2024-11-21 00:19:01.402269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:11.048 [2024-11-21 00:19:01.402277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:11.048 [2024-11-21 00:19:01.402283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:11.048 [2024-11-21 00:19:01.402315] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:11.048 [2024-11-21 00:19:01.402324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.048 [2024-11-21 00:19:01.402334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:11.048 [2024-11-21 00:19:01.402341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:11.048 [2024-11-21 00:19:01.402350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 
00:30:11.048 [2024-11-21 00:19:01.402356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:11.048 [2024-11-21 00:19:01.402364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:11.048 [2024-11-21 00:19:01.402371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:11.048 [2024-11-21 00:19:01.402382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:11.048 [2024-11-21 00:19:01.402388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:11.049 [2024-11-21 00:19:01.402396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:11.049 [2024-11-21 00:19:01.402402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:11.049 [2024-11-21 00:19:01.402410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:11.049 [2024-11-21 00:19:01.402415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:11.049 [2024-11-21 00:19:01.402422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:11.049 [2024-11-21 00:19:01.402428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:11.049 [2024-11-21 00:19:01.402436] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:11.049 [2024-11-21 00:19:01.402444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.049 [2024-11-21 00:19:01.402452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:11.049 [2024-11-21 00:19:01.402457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:11.049 [2024-11-21 00:19:01.402464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:11.049 [2024-11-21 00:19:01.402470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:11.049 [2024-11-21 00:19:01.402477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.049 [2024-11-21 00:19:01.402483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:11.049 [2024-11-21 00:19:01.402492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:30:11.049 [2024-11-21 00:19:01.402498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.049 [2024-11-21 00:19:01.402530] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs 
scrubbing, this may take a while. 00:30:11.049 [2024-11-21 00:19:01.402538] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:15.240 [2024-11-21 00:19:05.124903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.240 [2024-11-21 00:19:05.124972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:15.240 [2024-11-21 00:19:05.124989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3722.360 ms 00:30:15.240 [2024-11-21 00:19:05.124996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.240 [2024-11-21 00:19:05.135774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.240 [2024-11-21 00:19:05.135814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:15.240 [2024-11-21 00:19:05.135826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.695 ms 00:30:15.240 [2024-11-21 00:19:05.135833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.135906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.135913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:15.241 [2024-11-21 00:19:05.135926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:15.241 [2024-11-21 00:19:05.135932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.145171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.145348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:15.241 [2024-11-21 00:19:05.145365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.186 ms 00:30:15.241 [2024-11-21 00:19:05.145372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.145400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.145407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:15.241 [2024-11-21 00:19:05.145418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:15.241 [2024-11-21 00:19:05.145424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.145821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.145838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:15.241 [2024-11-21 00:19:05.145847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:30:15.241 [2024-11-21 00:19:05.145853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.145950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.145958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:15.241 [2024-11-21 00:19:05.145967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:30:15.241 [2024-11-21 00:19:05.145977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.161908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.161946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:30:15.241 [2024-11-21 00:19:05.161961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.899 ms 00:30:15.241 [2024-11-21 00:19:05.161969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.171229] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:15.241 [2024-11-21 00:19:05.174541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.174573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:15.241 [2024-11-21 00:19:05.174584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.480 ms 00:30:15.241 [2024-11-21 00:19:05.174594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.239965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.239999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:15.241 [2024-11-21 00:19:05.240008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.346 ms 00:30:15.241 [2024-11-21 00:19:05.240018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.240168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.240178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:15.241 [2024-11-21 00:19:05.240190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:30:15.241 [2024-11-21 00:19:05.240198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.243916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.243954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:15.241 [2024-11-21 00:19:05.243969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.703 ms 00:30:15.241 [2024-11-21 00:19:05.243977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.247205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.247237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:15.241 [2024-11-21 00:19:05.247246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.194 ms 00:30:15.241 [2024-11-21 00:19:05.247253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.247497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.247512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:15.241 [2024-11-21 00:19:05.247519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:30:15.241 [2024-11-21 00:19:05.247532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.279659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.279692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:15.241 [2024-11-21 00:19:05.279701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.113 ms 00:30:15.241 [2024-11-21 00:19:05.279710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 
[2024-11-21 00:19:05.284614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.284646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:15.241 [2024-11-21 00:19:05.284654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.875 ms 00:30:15.241 [2024-11-21 00:19:05.284662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.288083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.288113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:15.241 [2024-11-21 00:19:05.288121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.404 ms 00:30:15.241 [2024-11-21 00:19:05.288128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.292608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.292644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:15.241 [2024-11-21 00:19:05.292653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.463 ms 00:30:15.241 [2024-11-21 00:19:05.292663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.292686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.292695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:15.241 [2024-11-21 00:19:05.292702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:15.241 [2024-11-21 00:19:05.292709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.292764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.241 [2024-11-21 00:19:05.292777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:15.241 [2024-11-21 00:19:05.292788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:15.241 [2024-11-21 00:19:05.292796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.241 [2024-11-21 00:19:05.293649] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3902.670 ms, result 0 00:30:15.241 { 00:30:15.241 "name": "ftl0", 00:30:15.241 "uuid": "2ebebdb1-f50a-4a1e-8f04-426f1bc9f828" 00:30:15.241 } 00:30:15.241 00:19:05 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:15.241 00:19:05 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:15.241 00:19:05 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:15.241 00:19:05 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:15.501 [2024-11-21 00:19:05.702343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.702470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:15.501 [2024-11-21 00:19:05.702519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:15.501 [2024-11-21 00:19:05.702538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.501 [2024-11-21 00:19:05.702572] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:30:15.501 [2024-11-21 00:19:05.703135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.703219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:15.501 [2024-11-21 00:19:05.703258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:30:15.501 [2024-11-21 00:19:05.703278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.501 [2024-11-21 00:19:05.703496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.703519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:15.501 [2024-11-21 00:19:05.703535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:30:15.501 [2024-11-21 00:19:05.703553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.501 [2024-11-21 00:19:05.705991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.706061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:15.501 [2024-11-21 00:19:05.706100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.417 ms 00:30:15.501 [2024-11-21 00:19:05.706118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.501 [2024-11-21 00:19:05.710873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.710956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:15.501 [2024-11-21 00:19:05.710996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.731 ms 00:30:15.501 [2024-11-21 00:19:05.711019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.501 [2024-11-21 00:19:05.713142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.713239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:15.501 [2024-11-21 00:19:05.713336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.057 ms 00:30:15.501 [2024-11-21 00:19:05.713358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.501 [2024-11-21 00:19:05.718541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.718633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:15.501 [2024-11-21 00:19:05.718675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.146 ms 00:30:15.501 [2024-11-21 00:19:05.718697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.501 [2024-11-21 00:19:05.719004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.719129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:15.501 [2024-11-21 00:19:05.719177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:15.501 [2024-11-21 00:19:05.719221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.501 [2024-11-21 00:19:05.721513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.721606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:15.501 [2024-11-21 00:19:05.721647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:30:15.501 
[2024-11-21 00:19:05.721666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.501 [2024-11-21 00:19:05.723582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.723670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:15.501 [2024-11-21 00:19:05.723709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.844 ms 00:30:15.501 [2024-11-21 00:19:05.723727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.501 [2024-11-21 00:19:05.725304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.725392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:15.501 [2024-11-21 00:19:05.725431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.535 ms 00:30:15.501 [2024-11-21 00:19:05.725450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.501 [2024-11-21 00:19:05.727100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.501 [2024-11-21 00:19:05.727185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:15.501 [2024-11-21 00:19:05.727230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:30:15.501 [2024-11-21 00:19:05.727248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.502 [2024-11-21 00:19:05.727338] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:15.502 [2024-11-21 00:19:05.727357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:30:15.502 [2024-11-21 00:19:05.727456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:15.502 [2024-11-21 00:19:05.727957] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.727963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.727970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.727976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.727983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.727989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.727996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.728003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.728010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.728016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.728023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.728029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:15.503 [2024-11-21 00:19:05.728044] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:15.503 [2024-11-21 00:19:05.728050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ebebdb1-f50a-4a1e-8f04-426f1bc9f828 00:30:15.503 [2024-11-21 00:19:05.728057] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:15.503 [2024-11-21 00:19:05.728064] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:15.503 [2024-11-21 00:19:05.728071] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:15.503 [2024-11-21 00:19:05.728076] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:15.503 [2024-11-21 00:19:05.728084] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:15.503 [2024-11-21 00:19:05.728090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:15.503 [2024-11-21 00:19:05.728097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:15.503 [2024-11-21 00:19:05.728102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:15.503 [2024-11-21 00:19:05.728109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:15.503 [2024-11-21 00:19:05.728114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.503 [2024-11-21 00:19:05.728122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:15.503 [2024-11-21 00:19:05.728130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:30:15.503 [2024-11-21 00:19:05.728137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.729605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.503 [2024-11-21 00:19:05.729645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:30:15.503 [2024-11-21 00:19:05.729661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.444 ms 00:30:15.503 [2024-11-21 00:19:05.729679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.729758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.503 [2024-11-21 00:19:05.729777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:15.503 [2024-11-21 00:19:05.729793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:15.503 [2024-11-21 00:19:05.729950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.735975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 00:19:05.736071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:15.503 [2024-11-21 00:19:05.736116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.736136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.736193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 00:19:05.736211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:15.503 [2024-11-21 00:19:05.736227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.736252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.736341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 00:19:05.736438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:15.503 [2024-11-21 00:19:05.736456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.736474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.736510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 00:19:05.736533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:15.503 [2024-11-21 00:19:05.736550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.736566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.747121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 00:19:05.747239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:15.503 [2024-11-21 00:19:05.747279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.747309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.756328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 00:19:05.756443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:15.503 [2024-11-21 00:19:05.756484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.756507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.756583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 
00:19:05.756611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:15.503 [2024-11-21 00:19:05.756628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.756645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.756688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 00:19:05.756741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:15.503 [2024-11-21 00:19:05.756760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.756779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.756853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 00:19:05.756874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:15.503 [2024-11-21 00:19:05.756890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.756909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.756978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 00:19:05.757001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:15.503 [2024-11-21 00:19:05.757017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.757039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.757086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 00:19:05.757267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:15.503 [2024-11-21 00:19:05.757286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.757353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.757432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.503 [2024-11-21 00:19:05.757463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:15.503 [2024-11-21 00:19:05.757481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.503 [2024-11-21 00:19:05.757500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.503 [2024-11-21 00:19:05.757707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.321 ms, result 0 00:30:15.503 true 00:30:15.503 00:19:05 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 93879 00:30:15.503 00:19:05 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93879 ']' 00:30:15.503 00:19:05 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93879 00:30:15.503 00:19:05 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:30:15.503 00:19:05 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:15.503 00:19:05 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93879 00:30:15.503 killing process with pid 93879 00:30:15.503 00:19:05 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:15.503 00:19:05 ftl.ftl_restore_fast -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:15.503 00:19:05 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93879' 00:30:15.503 00:19:05 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 93879 00:30:15.503 00:19:05 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 93879 00:30:20.787 00:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:24.994 262144+0 records in 00:30:24.994 262144+0 records out 00:30:24.994 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.78896 s, 283 MB/s 00:30:24.994 00:19:14 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:26.908 00:19:16 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:26.909 [2024-11-21 00:19:16.882662] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:30:26.909 [2024-11-21 00:19:16.882898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94088 ] 00:30:26.909 [2024-11-21 00:19:17.019207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.909 [2024-11-21 00:19:17.069968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.909 [2024-11-21 00:19:17.184864] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:26.909 [2024-11-21 00:19:17.184940] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:27.172 [2024-11-21 00:19:17.342509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.342707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:27.172 [2024-11-21 00:19:17.342733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:27.172 [2024-11-21 00:19:17.342741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.172 [2024-11-21 00:19:17.342800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.342815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:27.172 [2024-11-21 00:19:17.342824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:27.172 [2024-11-21 00:19:17.342837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.172 [2024-11-21 00:19:17.342864] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:27.172 [2024-11-21 00:19:17.343108] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:27.172 [2024-11-21 00:19:17.343124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.343135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:27.172 [2024-11-21 00:19:17.343147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:30:27.172 [2024-11-21 00:19:17.343155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:27.172 [2024-11-21 00:19:17.344625] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:27.172 [2024-11-21 00:19:17.347797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.347833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:27.172 [2024-11-21 00:19:17.347849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:30:27.172 [2024-11-21 00:19:17.347857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.172 [2024-11-21 00:19:17.347910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.347923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:27.172 [2024-11-21 00:19:17.347931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:27.172 [2024-11-21 00:19:17.347940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.172 [2024-11-21 00:19:17.354804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.354835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:27.172 [2024-11-21 00:19:17.354845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.820 ms 00:30:27.172 [2024-11-21 00:19:17.354854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.172 [2024-11-21 00:19:17.354943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.354956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:27.172 [2024-11-21 00:19:17.354964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:27.172 [2024-11-21 00:19:17.354971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.172 [2024-11-21 00:19:17.355007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.355021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:27.172 [2024-11-21 00:19:17.355029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:27.172 [2024-11-21 00:19:17.355037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.172 [2024-11-21 00:19:17.355060] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:27.172 [2024-11-21 00:19:17.356833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.356859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:27.172 [2024-11-21 00:19:17.356874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms 00:30:27.172 [2024-11-21 00:19:17.356882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.172 [2024-11-21 00:19:17.356912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.356920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:27.172 [2024-11-21 00:19:17.356928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:27.172 [2024-11-21 00:19:17.356936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.172 [2024-11-21 00:19:17.356962] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout 
setup mode 0 00:30:27.172 [2024-11-21 00:19:17.356984] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:27.172 [2024-11-21 00:19:17.357032] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:27.172 [2024-11-21 00:19:17.357048] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:27.172 [2024-11-21 00:19:17.357156] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:27.172 [2024-11-21 00:19:17.357171] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:27.172 [2024-11-21 00:19:17.357182] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:27.172 [2024-11-21 00:19:17.357192] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:27.172 [2024-11-21 00:19:17.357202] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:27.172 [2024-11-21 00:19:17.357210] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:27.172 [2024-11-21 00:19:17.357217] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:27.172 [2024-11-21 00:19:17.357228] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:27.172 [2024-11-21 00:19:17.357236] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:27.172 [2024-11-21 00:19:17.357243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.357250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:27.172 [2024-11-21 00:19:17.357259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:30:27.172 [2024-11-21 00:19:17.357266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.172 [2024-11-21 00:19:17.357375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.172 [2024-11-21 00:19:17.357386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:27.172 [2024-11-21 00:19:17.357395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:30:27.172 [2024-11-21 00:19:17.357402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.172 [2024-11-21 00:19:17.357518] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:27.172 [2024-11-21 00:19:17.357530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:27.172 [2024-11-21 00:19:17.357539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:27.172 [2024-11-21 00:19:17.357548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.172 [2024-11-21 00:19:17.357560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:27.172 [2024-11-21 00:19:17.357568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:27.172 [2024-11-21 00:19:17.357577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:27.172 [2024-11-21 00:19:17.357585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:27.172 [2024-11-21 00:19:17.357593] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:27.172 [2024-11-21 00:19:17.357601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:27.172 [2024-11-21 00:19:17.357608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:27.172 [2024-11-21 00:19:17.357615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:27.172 [2024-11-21 00:19:17.357628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:27.172 [2024-11-21 00:19:17.357639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:27.172 [2024-11-21 00:19:17.357647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:27.172 [2024-11-21 00:19:17.357660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.172 [2024-11-21 00:19:17.357672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:27.172 [2024-11-21 00:19:17.357680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:27.172 [2024-11-21 00:19:17.357693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.172 [2024-11-21 00:19:17.357700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:27.172 [2024-11-21 00:19:17.357708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:27.172 [2024-11-21 00:19:17.357715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.172 [2024-11-21 00:19:17.357722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:27.172 [2024-11-21 00:19:17.357730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:27.172 [2024-11-21 00:19:17.357744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.172 [2024-11-21 00:19:17.357752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:27.172 [2024-11-21 00:19:17.357760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:27.172 [2024-11-21 00:19:17.357772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.172 [2024-11-21 00:19:17.357783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:27.172 [2024-11-21 00:19:17.357795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:27.172 [2024-11-21 00:19:17.357807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.172 [2024-11-21 00:19:17.357815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:27.172 [2024-11-21 00:19:17.357827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:27.172 [2024-11-21 00:19:17.357837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:27.173 [2024-11-21 00:19:17.357849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:27.173 [2024-11-21 00:19:17.357860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:27.173 [2024-11-21 00:19:17.357866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:27.173 [2024-11-21 00:19:17.357873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:27.173 [2024-11-21 00:19:17.357886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:27.173 [2024-11-21 00:19:17.357892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.173 
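The l2p region size in the layout dump above can be reproduced from the two L2P parameters reported a few records earlier ("L2P entries: 20971520", "L2P address size: 4"). A throwaway bash sanity check, not part of the test scripts; the numbers are copied verbatim from the ftl_layout output:

entries=20971520      # "L2P entries: 20971520"
addr_size=4           # "L2P address size: 4" (bytes per entry)
echo "$((entries * addr_size / 1024 / 1024)) MiB"   # -> 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"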
[2024-11-21 00:19:17.357898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:27.173 [2024-11-21 00:19:17.357905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:27.173 [2024-11-21 00:19:17.357911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.173 [2024-11-21 00:19:17.357918] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:27.173 [2024-11-21 00:19:17.357927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:27.173 [2024-11-21 00:19:17.357935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:27.173 [2024-11-21 00:19:17.357949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.173 [2024-11-21 00:19:17.357957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:27.173 [2024-11-21 00:19:17.357967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:27.173 [2024-11-21 00:19:17.357979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:27.173 [2024-11-21 00:19:17.357987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:27.173 [2024-11-21 00:19:17.357995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:27.173 [2024-11-21 00:19:17.358001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:27.173 [2024-11-21 00:19:17.358015] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:27.173 [2024-11-21 00:19:17.358024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:27.173 [2024-11-21 00:19:17.358041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:27.173 [2024-11-21 00:19:17.358049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:27.173 [2024-11-21 00:19:17.358057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:27.173 [2024-11-21 00:19:17.358070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:27.173 [2024-11-21 00:19:17.358077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:27.173 [2024-11-21 00:19:17.358091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:27.173 [2024-11-21 00:19:17.358099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:27.173 [2024-11-21 00:19:17.358107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:27.173 [2024-11-21 00:19:17.358114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:27.173 [2024-11-21 00:19:17.358131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:27.173 [2024-11-21 
00:19:17.358142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:27.173 [2024-11-21 00:19:17.358150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:27.173 [2024-11-21 00:19:17.358158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:27.173 [2024-11-21 00:19:17.358166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:27.173 [2024-11-21 00:19:17.358173] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:27.173 [2024-11-21 00:19:17.358180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:27.173 [2024-11-21 00:19:17.358188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:27.173 [2024-11-21 00:19:17.358195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:27.173 [2024-11-21 00:19:17.358203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:27.173 [2024-11-21 00:19:17.358209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:27.173 [2024-11-21 00:19:17.358216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.358225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:27.173 [2024-11-21 00:19:17.358234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:30:27.173 [2024-11-21 00:19:17.358241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.382400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.382443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:27.173 [2024-11-21 00:19:17.382458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.109 ms 00:30:27.173 [2024-11-21 00:19:17.382467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.382562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.382571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:27.173 [2024-11-21 00:19:17.382580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:27.173 [2024-11-21 00:19:17.382589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.393695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.393891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:27.173 [2024-11-21 00:19:17.393910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.050 ms 00:30:27.173 [2024-11-21 00:19:17.393919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 
00:19:17.393957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.393968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:27.173 [2024-11-21 00:19:17.393979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:27.173 [2024-11-21 00:19:17.393988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.394492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.394523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:27.173 [2024-11-21 00:19:17.394535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:30:27.173 [2024-11-21 00:19:17.394546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.394713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.394726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:27.173 [2024-11-21 00:19:17.394737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:30:27.173 [2024-11-21 00:19:17.394749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.400875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.400988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:27.173 [2024-11-21 00:19:17.401007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.101 ms 00:30:27.173 [2024-11-21 00:19:17.401015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.404261] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:27.173 [2024-11-21 00:19:17.404407] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:27.173 [2024-11-21 00:19:17.404426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.404435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:27.173 [2024-11-21 00:19:17.404444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.327 ms 00:30:27.173 [2024-11-21 00:19:17.404451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.419190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.419241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:27.173 [2024-11-21 00:19:17.419257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.705 ms 00:30:27.173 [2024-11-21 00:19:17.419267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.421505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.421534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:27.173 [2024-11-21 00:19:17.421543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:30:27.173 [2024-11-21 00:19:17.421550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.423149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:30:27.173 [2024-11-21 00:19:17.423264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:27.173 [2024-11-21 00:19:17.423279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.569 ms 00:30:27.173 [2024-11-21 00:19:17.423286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.423625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.423640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:27.173 [2024-11-21 00:19:17.423649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:30:27.173 [2024-11-21 00:19:17.423657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.442864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.173 [2024-11-21 00:19:17.442904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:27.173 [2024-11-21 00:19:17.442919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.192 ms 00:30:27.173 [2024-11-21 00:19:17.442927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.173 [2024-11-21 00:19:17.450761] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:27.173 [2024-11-21 00:19:17.453401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.174 [2024-11-21 00:19:17.453526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:27.174 [2024-11-21 00:19:17.453541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.438 ms 00:30:27.174 [2024-11-21 00:19:17.453561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.174 [2024-11-21 00:19:17.453627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.174 [2024-11-21 00:19:17.453637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:27.174 [2024-11-21 00:19:17.453646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:27.174 [2024-11-21 00:19:17.453654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.174 [2024-11-21 00:19:17.453722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.174 [2024-11-21 00:19:17.453732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:27.174 [2024-11-21 00:19:17.453742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:27.174 [2024-11-21 00:19:17.453754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.174 [2024-11-21 00:19:17.453781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.174 [2024-11-21 00:19:17.453798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:27.174 [2024-11-21 00:19:17.453806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:27.174 [2024-11-21 00:19:17.453813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.174 [2024-11-21 00:19:17.453847] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:27.174 [2024-11-21 00:19:17.453859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.174 [2024-11-21 00:19:17.453868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:30:27.174 [2024-11-21 00:19:17.453876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:27.174 [2024-11-21 00:19:17.453883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.174 [2024-11-21 00:19:17.460759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.174 [2024-11-21 00:19:17.461106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:27.174 [2024-11-21 00:19:17.461156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms 00:30:27.174 [2024-11-21 00:19:17.461200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.174 [2024-11-21 00:19:17.461433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.174 [2024-11-21 00:19:17.461467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:27.174 [2024-11-21 00:19:17.461493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:30:27.174 [2024-11-21 00:19:17.461515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.174 [2024-11-21 00:19:17.463915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 120.260 ms, result 0 00:30:28.114  [2024-11-21T00:19:19.915Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-21T00:19:20.489Z] Copying: 43/1024 [MB] (23 MBps) [2024-11-21T00:19:21.872Z] Copying: 57/1024 [MB] (13 MBps) [2024-11-21T00:19:22.813Z] Copying: 71/1024 [MB] (13 MBps) [2024-11-21T00:19:23.746Z] Copying: 89/1024 [MB] (18 MBps) [2024-11-21T00:19:24.679Z] Copying: 111/1024 [MB] (21 MBps) [2024-11-21T00:19:25.613Z] Copying: 135/1024 [MB] (23 MBps) [2024-11-21T00:19:26.549Z] Copying: 154/1024 [MB] (19 MBps) [2024-11-21T00:19:27.491Z] Copying: 174/1024 [MB] (19 MBps) [2024-11-21T00:19:28.863Z] Copying: 184/1024 [MB] (10 MBps) [2024-11-21T00:19:29.798Z] Copying: 196/1024 [MB] (11 MBps) [2024-11-21T00:19:30.732Z] Copying: 207/1024 [MB] (11 MBps) [2024-11-21T00:19:31.750Z] Copying: 218/1024 [MB] (11 MBps) [2024-11-21T00:19:32.684Z] Copying: 230/1024 [MB] (11 MBps) [2024-11-21T00:19:33.627Z] Copying: 241/1024 [MB] (11 MBps) [2024-11-21T00:19:34.562Z] Copying: 252/1024 [MB] (10 MBps) [2024-11-21T00:19:35.498Z] Copying: 263/1024 [MB] (11 MBps) [2024-11-21T00:19:36.873Z] Copying: 275/1024 [MB] (11 MBps) [2024-11-21T00:19:37.807Z] Copying: 286/1024 [MB] (11 MBps) [2024-11-21T00:19:38.741Z] Copying: 298/1024 [MB] (11 MBps) [2024-11-21T00:19:39.675Z] Copying: 310/1024 [MB] (11 MBps) [2024-11-21T00:19:40.609Z] Copying: 321/1024 [MB] (11 MBps) [2024-11-21T00:19:41.545Z] Copying: 333/1024 [MB] (11 MBps) [2024-11-21T00:19:42.481Z] Copying: 345/1024 [MB] (11 MBps) [2024-11-21T00:19:43.857Z] Copying: 356/1024 [MB] (11 MBps) [2024-11-21T00:19:44.794Z] Copying: 368/1024 [MB] (11 MBps) [2024-11-21T00:19:45.733Z] Copying: 379/1024 [MB] (11 MBps) [2024-11-21T00:19:46.669Z] Copying: 389/1024 [MB] (10 MBps) [2024-11-21T00:19:47.604Z] Copying: 401/1024 [MB] (11 MBps) [2024-11-21T00:19:48.538Z] Copying: 412/1024 [MB] (11 MBps) [2024-11-21T00:19:49.909Z] Copying: 426/1024 [MB] (13 MBps) [2024-11-21T00:19:50.843Z] Copying: 437/1024 [MB] (11 MBps) [2024-11-21T00:19:51.778Z] Copying: 451/1024 [MB] (13 MBps) [2024-11-21T00:19:52.744Z] Copying: 463/1024 [MB] (11 MBps) [2024-11-21T00:19:53.684Z] Copying: 474/1024 [MB] (11 MBps) [2024-11-21T00:19:54.619Z] Copying: 486/1024 [MB] (11 MBps) [2024-11-21T00:19:55.554Z] Copying: 499/1024 [MB] (13 MBps) 
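The throughput figures in this run are internally consistent with the progress records interleaved here. A quick cross-check (bash/awk sketch; the values are copied from the log above, nothing in it was executed by the job):

echo $((262144 * 4096))                              # dd wrote 256K blocks of 4K: 1073741824 bytes, as reported
awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 3.78896 / 1e6 }'   # -> 283 MB/s, dd's own figure
awk 'BEGIN { printf "%.0f s\n", 1024 / 12 }'         # ~85 s for spdk_dd at the reported 12 MBps average,
                                                     # matching the ~00:19:19Z..00:20:41Z progress span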
[2024-11-21T00:19:56.490Z] Copying: 510/1024 [MB] (11 MBps) [2024-11-21T00:19:57.867Z] Copying: 522/1024 [MB] (11 MBps) [2024-11-21T00:19:58.802Z] Copying: 533/1024 [MB] (11 MBps) [2024-11-21T00:19:59.735Z] Copying: 544/1024 [MB] (11 MBps) [2024-11-21T00:20:00.669Z] Copying: 556/1024 [MB] (11 MBps) [2024-11-21T00:20:01.603Z] Copying: 567/1024 [MB] (11 MBps) [2024-11-21T00:20:02.539Z] Copying: 578/1024 [MB] (11 MBps) [2024-11-21T00:20:03.583Z] Copying: 590/1024 [MB] (11 MBps) [2024-11-21T00:20:04.533Z] Copying: 601/1024 [MB] (11 MBps) [2024-11-21T00:20:05.907Z] Copying: 613/1024 [MB] (11 MBps) [2024-11-21T00:20:06.842Z] Copying: 625/1024 [MB] (11 MBps) [2024-11-21T00:20:07.778Z] Copying: 637/1024 [MB] (11 MBps) [2024-11-21T00:20:08.713Z] Copying: 648/1024 [MB] (11 MBps) [2024-11-21T00:20:09.655Z] Copying: 660/1024 [MB] (11 MBps) [2024-11-21T00:20:10.592Z] Copying: 672/1024 [MB] (11 MBps) [2024-11-21T00:20:11.534Z] Copying: 683/1024 [MB] (11 MBps) [2024-11-21T00:20:12.917Z] Copying: 693/1024 [MB] (10 MBps) [2024-11-21T00:20:13.484Z] Copying: 704/1024 [MB] (10 MBps) [2024-11-21T00:20:14.865Z] Copying: 715/1024 [MB] (10 MBps) [2024-11-21T00:20:15.800Z] Copying: 726/1024 [MB] (11 MBps) [2024-11-21T00:20:16.736Z] Copying: 737/1024 [MB] (10 MBps) [2024-11-21T00:20:17.670Z] Copying: 749/1024 [MB] (11 MBps) [2024-11-21T00:20:18.605Z] Copying: 760/1024 [MB] (11 MBps) [2024-11-21T00:20:19.538Z] Copying: 771/1024 [MB] (11 MBps) [2024-11-21T00:20:20.919Z] Copying: 783/1024 [MB] (11 MBps) [2024-11-21T00:20:21.486Z] Copying: 794/1024 [MB] (11 MBps) [2024-11-21T00:20:22.862Z] Copying: 804/1024 [MB] (10 MBps) [2024-11-21T00:20:23.797Z] Copying: 816/1024 [MB] (11 MBps) [2024-11-21T00:20:24.733Z] Copying: 827/1024 [MB] (11 MBps) [2024-11-21T00:20:25.668Z] Copying: 839/1024 [MB] (11 MBps) [2024-11-21T00:20:26.602Z] Copying: 850/1024 [MB] (11 MBps) [2024-11-21T00:20:27.539Z] Copying: 862/1024 [MB] (11 MBps) [2024-11-21T00:20:28.914Z] Copying: 873/1024 [MB] (11 MBps) [2024-11-21T00:20:29.482Z] Copying: 884/1024 [MB] (10 MBps) [2024-11-21T00:20:30.857Z] Copying: 895/1024 [MB] (11 MBps) [2024-11-21T00:20:31.791Z] Copying: 907/1024 [MB] (11 MBps) [2024-11-21T00:20:32.726Z] Copying: 918/1024 [MB] (10 MBps) [2024-11-21T00:20:33.667Z] Copying: 929/1024 [MB] (11 MBps) [2024-11-21T00:20:34.716Z] Copying: 940/1024 [MB] (11 MBps) [2024-11-21T00:20:35.651Z] Copying: 951/1024 [MB] (10 MBps) [2024-11-21T00:20:36.585Z] Copying: 962/1024 [MB] (11 MBps) [2024-11-21T00:20:37.527Z] Copying: 973/1024 [MB] (11 MBps) [2024-11-21T00:20:38.902Z] Copying: 984/1024 [MB] (10 MBps) [2024-11-21T00:20:39.836Z] Copying: 995/1024 [MB] (11 MBps) [2024-11-21T00:20:40.774Z] Copying: 1007/1024 [MB] (11 MBps) [2024-11-21T00:20:41.036Z] Copying: 1018/1024 [MB] (11 MBps) [2024-11-21T00:20:41.036Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-21 00:20:40.981979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.615 [2024-11-21 00:20:40.982059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:50.615 [2024-11-21 00:20:40.982078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:50.615 [2024-11-21 00:20:40.982093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.615 [2024-11-21 00:20:40.982117] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:50.615 [2024-11-21 00:20:40.983118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.615 [2024-11-21 
00:20:40.983162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:50.615 [2024-11-21 00:20:40.983186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:31:50.615 [2024-11-21 00:20:40.983197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.615 [2024-11-21 00:20:40.986244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.615 [2024-11-21 00:20:40.986511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:50.615 [2024-11-21 00:20:40.986533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.019 ms 00:31:50.615 [2024-11-21 00:20:40.986544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.615 [2024-11-21 00:20:40.986589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.615 [2024-11-21 00:20:40.986606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:50.615 [2024-11-21 00:20:40.986616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:50.615 [2024-11-21 00:20:40.986624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.615 [2024-11-21 00:20:40.986688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.615 [2024-11-21 00:20:40.986700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:50.615 [2024-11-21 00:20:40.986715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:50.615 [2024-11-21 00:20:40.986723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.615 [2024-11-21 00:20:40.986737] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:50.615 [2024-11-21 00:20:40.986752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986851] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.986996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 
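For scale, each band in the dump above spans 261120 blocks; assuming the 4 KiB FTL block size implied by the earlier layout dump (an inference, not stated in these records), that is roughly 1 GiB per band, and "0 / 261120" appears to read as valid blocks versus total blocks. The "WAF: inf" in the first statistics dump also follows directly from its inputs:

echo "$((261120 * 4096 / 1024 / 1024)) MiB per band"   # -> 1020 MiB, assuming 4 KiB blocks
# WAF = total media writes / user writes = 960 / 0, hence reported as "inf" in
# the earlier ftl_dev_dump_stats output (no user writes had reached ftl0 yet).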
[2024-11-21 00:20:40.987048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:31:50.615 [2024-11-21 00:20:40.987242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:50.615 [2024-11-21 00:20:40.987280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:50.616 [2024-11-21 00:20:40.987597] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:50.616 [2024-11-21 00:20:40.987606] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ebebdb1-f50a-4a1e-8f04-426f1bc9f828 00:31:50.616 [2024-11-21 00:20:40.987618] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:50.616 [2024-11-21 00:20:40.987628] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:50.616 [2024-11-21 00:20:40.987637] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:50.616 [2024-11-21 00:20:40.987645] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:50.616 [2024-11-21 00:20:40.987653] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:50.616 [2024-11-21 00:20:40.987661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:50.616 [2024-11-21 00:20:40.987668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:50.616 [2024-11-21 00:20:40.987675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:50.616 [2024-11-21 00:20:40.987682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:50.616 [2024-11-21 00:20:40.987688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.616 [2024-11-21 00:20:40.987705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:50.616 [2024-11-21 00:20:40.987718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:31:50.616 [2024-11-21 00:20:40.987729] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.616 [2024-11-21 00:20:40.990955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.616 [2024-11-21 00:20:40.990993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:50.616 [2024-11-21 00:20:40.991003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.205 ms 00:31:50.616 [2024-11-21 00:20:40.991011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.616 [2024-11-21 00:20:40.991175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.616 [2024-11-21 00:20:40.991186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:50.616 [2024-11-21 00:20:40.991197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:31:50.616 [2024-11-21 00:20:40.991208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.616 [2024-11-21 00:20:41.000536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.616 [2024-11-21 00:20:41.000587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:50.616 [2024-11-21 00:20:41.000598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.616 [2024-11-21 00:20:41.000619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.616 [2024-11-21 00:20:41.000687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.616 [2024-11-21 00:20:41.000698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:50.616 [2024-11-21 00:20:41.000707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.616 [2024-11-21 00:20:41.000721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.616 [2024-11-21 00:20:41.000759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.616 [2024-11-21 00:20:41.000768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:50.616 [2024-11-21 00:20:41.000777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.616 [2024-11-21 00:20:41.000790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.616 [2024-11-21 00:20:41.000806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.616 [2024-11-21 00:20:41.000816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:50.616 [2024-11-21 00:20:41.000825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.616 [2024-11-21 00:20:41.000833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.616 [2024-11-21 00:20:41.020697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.616 [2024-11-21 00:20:41.020758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:50.616 [2024-11-21 00:20:41.020771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.616 [2024-11-21 00:20:41.020781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.877 [2024-11-21 00:20:41.036943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.877 [2024-11-21 00:20:41.037013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:50.877 [2024-11-21 00:20:41.037028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:31:50.877 [2024-11-21 00:20:41.037038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.877 [2024-11-21 00:20:41.037124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.877 [2024-11-21 00:20:41.037136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:50.877 [2024-11-21 00:20:41.037145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.877 [2024-11-21 00:20:41.037154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.877 [2024-11-21 00:20:41.037202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.877 [2024-11-21 00:20:41.037213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:50.877 [2024-11-21 00:20:41.037223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.877 [2024-11-21 00:20:41.037233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.877 [2024-11-21 00:20:41.037333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.877 [2024-11-21 00:20:41.037346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:50.877 [2024-11-21 00:20:41.037356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.877 [2024-11-21 00:20:41.037365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.877 [2024-11-21 00:20:41.037399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.877 [2024-11-21 00:20:41.037411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:50.877 [2024-11-21 00:20:41.037421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.877 [2024-11-21 00:20:41.037430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.877 [2024-11-21 00:20:41.037485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.877 [2024-11-21 00:20:41.037501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:50.877 [2024-11-21 00:20:41.037511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.877 [2024-11-21 00:20:41.037520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.877 [2024-11-21 00:20:41.037584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.877 [2024-11-21 00:20:41.037598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:50.877 [2024-11-21 00:20:41.037608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.877 [2024-11-21 00:20:41.037620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.877 [2024-11-21 00:20:41.037821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 55.784 ms, result 0 00:31:51.138 00:31:51.138 00:31:51.138 00:20:41 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:31:51.138 [2024-11-21 00:20:41.554886] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
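Two details are worth pinning down at this point in the run. First, in the statistics dump above, WAF (write amplification factor, i.e. total media writes divided by user writes) is reported as inf because the device has logged 32 internal writes against 0 user writes. Second, restore.sh@74 reads the test data back through spdk_dd with --count=262144; at the 4 KiB FTL block size that is 262144 x 4096 B = 1 GiB, which matches the 1024 MB total reported by the copy progress further down.

The trace_step entries repeated throughout this log can be summarized mechanically. Below is a minimal sketch (a hypothetical helper, not part of the test suite) that assumes one log entry per line as originally emitted, and the pairing seen above: a 428:trace_step line carrying "name: <step>" followed by a 430:trace_step line carrying "duration: <ms> ms". It reads the log on stdin:

    import re
    import sys

    # 428:trace_step lines carry the step name; the matching 430:trace_step
    # line that follows carries its duration in milliseconds.
    name_re = re.compile(
        r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)(?:\s+\d{2}:\d{2}:\d{2}\.\d+\s*$|$)")
    dur_re = re.compile(
        r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

    totals = {}   # accumulates across repeated occurrences of a step name
    step = None
    for line in sys.stdin:
        m = name_re.search(line)
        if m:
            step = m.group(1).strip()
            continue
        m = dur_re.search(line)
        if m and step is not None:
            totals[step] = totals.get(step, 0.0) + float(m.group(1))
            step = None

    # Longest steps first; for the 'FTL startup' sequence above this puts
    # "Initialize metadata" (22.099 ms) and "Initialize NV cache" (16.188 ms) on top.
    for step, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{ms:9.3f} ms  {step}")

Fed only the startup sequence above, the per-step durations account for most of the 92.030 ms that finish_msg reports for 'FTL startup'; fed the whole log, same-named steps from the shutdown and startup sequences are merged into one total.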
00:31:51.138 [2024-11-21 00:20:41.555035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94940 ] 00:31:51.399 [2024-11-21 00:20:41.690649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.399 [2024-11-21 00:20:41.762807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.660 [2024-11-21 00:20:41.912856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:51.660 [2024-11-21 00:20:41.912957] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:51.660 [2024-11-21 00:20:42.076071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.660 [2024-11-21 00:20:42.076399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:51.660 [2024-11-21 00:20:42.076435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:51.660 [2024-11-21 00:20:42.076445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.660 [2024-11-21 00:20:42.076527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.660 [2024-11-21 00:20:42.076539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:51.660 [2024-11-21 00:20:42.076550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:51.660 [2024-11-21 00:20:42.076567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.660 [2024-11-21 00:20:42.076591] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:51.660 [2024-11-21 00:20:42.077013] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:51.660 [2024-11-21 00:20:42.077051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.660 [2024-11-21 00:20:42.077065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:51.660 [2024-11-21 00:20:42.077080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:31:51.660 [2024-11-21 00:20:42.077091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.922 [2024-11-21 00:20:42.077472] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:51.922 [2024-11-21 00:20:42.077507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.922 [2024-11-21 00:20:42.077524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:51.922 [2024-11-21 00:20:42.077534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:51.922 [2024-11-21 00:20:42.077543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.922 [2024-11-21 00:20:42.077610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.922 [2024-11-21 00:20:42.077625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:51.922 [2024-11-21 00:20:42.077638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:51.922 [2024-11-21 00:20:42.077646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.922 [2024-11-21 00:20:42.078105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:51.922 [2024-11-21 00:20:42.078138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:51.922 [2024-11-21 00:20:42.078150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:31:51.922 [2024-11-21 00:20:42.078158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.922 [2024-11-21 00:20:42.078257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.922 [2024-11-21 00:20:42.078272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:51.922 [2024-11-21 00:20:42.078285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:31:51.922 [2024-11-21 00:20:42.078318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.922 [2024-11-21 00:20:42.078345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.922 [2024-11-21 00:20:42.078355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:51.922 [2024-11-21 00:20:42.078365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:51.922 [2024-11-21 00:20:42.078373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.922 [2024-11-21 00:20:42.078400] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:51.922 [2024-11-21 00:20:42.081345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.922 [2024-11-21 00:20:42.081393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:51.922 [2024-11-21 00:20:42.081407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.954 ms 00:31:51.922 [2024-11-21 00:20:42.081415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.922 [2024-11-21 00:20:42.081456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.922 [2024-11-21 00:20:42.081465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:51.922 [2024-11-21 00:20:42.081474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:51.922 [2024-11-21 00:20:42.081483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.922 [2024-11-21 00:20:42.081539] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:51.922 [2024-11-21 00:20:42.081567] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:51.922 [2024-11-21 00:20:42.081620] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:51.923 [2024-11-21 00:20:42.081639] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:51.923 [2024-11-21 00:20:42.081750] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:51.923 [2024-11-21 00:20:42.081764] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:51.923 [2024-11-21 00:20:42.081780] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:51.923 [2024-11-21 00:20:42.081799] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:51.923 [2024-11-21 00:20:42.081815] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:51.923 [2024-11-21 00:20:42.081835] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:51.923 [2024-11-21 00:20:42.081846] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:51.923 [2024-11-21 00:20:42.081856] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:51.923 [2024-11-21 00:20:42.081866] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:51.923 [2024-11-21 00:20:42.081878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.923 [2024-11-21 00:20:42.081889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:51.923 [2024-11-21 00:20:42.081898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:31:51.923 [2024-11-21 00:20:42.081906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.923 [2024-11-21 00:20:42.081995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.923 [2024-11-21 00:20:42.082010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:51.923 [2024-11-21 00:20:42.082020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:51.923 [2024-11-21 00:20:42.082031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.923 [2024-11-21 00:20:42.082130] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:51.923 [2024-11-21 00:20:42.082145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:51.923 [2024-11-21 00:20:42.082157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:51.923 [2024-11-21 00:20:42.082173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:51.923 [2024-11-21 00:20:42.082192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:51.923 [2024-11-21 00:20:42.082211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:51.923 [2024-11-21 00:20:42.082220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:51.923 [2024-11-21 00:20:42.082240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:51.923 [2024-11-21 00:20:42.082252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:51.923 [2024-11-21 00:20:42.082261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:51.923 [2024-11-21 00:20:42.082268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:51.923 [2024-11-21 00:20:42.082277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:51.923 [2024-11-21 00:20:42.082285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:51.923 [2024-11-21 00:20:42.082328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:51.923 [2024-11-21 00:20:42.082338] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:51.923 [2024-11-21 00:20:42.082362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:51.923 [2024-11-21 00:20:42.082375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:51.923 [2024-11-21 00:20:42.082383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:51.923 [2024-11-21 00:20:42.082399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:51.923 [2024-11-21 00:20:42.082406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:51.923 [2024-11-21 00:20:42.082422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:51.923 [2024-11-21 00:20:42.082440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:51.923 [2024-11-21 00:20:42.082454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:51.923 [2024-11-21 00:20:42.082461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:51.923 [2024-11-21 00:20:42.082474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:51.923 [2024-11-21 00:20:42.082488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:51.923 [2024-11-21 00:20:42.082495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:51.923 [2024-11-21 00:20:42.082503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:51.923 [2024-11-21 00:20:42.082510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:51.923 [2024-11-21 00:20:42.082519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:51.923 [2024-11-21 00:20:42.082533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:51.923 [2024-11-21 00:20:42.082540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082548] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:51.923 [2024-11-21 00:20:42.082556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:51.923 [2024-11-21 00:20:42.082564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:51.923 [2024-11-21 00:20:42.082577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:51.923 [2024-11-21 00:20:42.082585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:51.923 [2024-11-21 00:20:42.082592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:51.923 [2024-11-21 00:20:42.082600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:51.923 
[2024-11-21 00:20:42.082608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:51.923 [2024-11-21 00:20:42.082618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:51.923 [2024-11-21 00:20:42.082625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:51.923 [2024-11-21 00:20:42.082634] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:51.923 [2024-11-21 00:20:42.082647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:51.923 [2024-11-21 00:20:42.082656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:51.923 [2024-11-21 00:20:42.082664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:51.923 [2024-11-21 00:20:42.082672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:51.923 [2024-11-21 00:20:42.082680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:51.923 [2024-11-21 00:20:42.082687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:51.923 [2024-11-21 00:20:42.082694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:51.923 [2024-11-21 00:20:42.082702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:51.923 [2024-11-21 00:20:42.082709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:51.923 [2024-11-21 00:20:42.082716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:51.923 [2024-11-21 00:20:42.082723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:51.923 [2024-11-21 00:20:42.082730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:51.923 [2024-11-21 00:20:42.082745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:51.923 [2024-11-21 00:20:42.082755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:51.923 [2024-11-21 00:20:42.082762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:51.923 [2024-11-21 00:20:42.082769] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:51.923 [2024-11-21 00:20:42.082783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:51.923 [2024-11-21 00:20:42.082793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:51.923 [2024-11-21 00:20:42.082801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:51.923 [2024-11-21 00:20:42.082808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:51.923 [2024-11-21 00:20:42.082816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:51.923 [2024-11-21 00:20:42.082827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.923 [2024-11-21 00:20:42.082837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:51.923 [2024-11-21 00:20:42.082845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:31:51.923 [2024-11-21 00:20:42.082853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.923 [2024-11-21 00:20:42.105001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.105063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:51.924 [2024-11-21 00:20:42.105083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.099 ms 00:31:51.924 [2024-11-21 00:20:42.105094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.105209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.105223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:51.924 [2024-11-21 00:20:42.105242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:31:51.924 [2024-11-21 00:20:42.105252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.121546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.121594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:51.924 [2024-11-21 00:20:42.121610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.188 ms 00:31:51.924 [2024-11-21 00:20:42.121620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.121659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.121670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:51.924 [2024-11-21 00:20:42.121681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:51.924 [2024-11-21 00:20:42.121690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.121790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.121802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:51.924 [2024-11-21 00:20:42.121814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:51.924 [2024-11-21 00:20:42.121831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.121968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.121978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:51.924 [2024-11-21 00:20:42.121991] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:31:51.924 [2024-11-21 00:20:42.122003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.131732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.131987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:51.924 [2024-11-21 00:20:42.132006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.707 ms 00:31:51.924 [2024-11-21 00:20:42.132015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.132183] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:51.924 [2024-11-21 00:20:42.132199] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:51.924 [2024-11-21 00:20:42.132214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.132223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:51.924 [2024-11-21 00:20:42.132234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:31:51.924 [2024-11-21 00:20:42.132242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.144571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.144618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:51.924 [2024-11-21 00:20:42.144629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.312 ms 00:31:51.924 [2024-11-21 00:20:42.144640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.144783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.144800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:51.924 [2024-11-21 00:20:42.144813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:31:51.924 [2024-11-21 00:20:42.144826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.144876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.144893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:51.924 [2024-11-21 00:20:42.144906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:51.924 [2024-11-21 00:20:42.144918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.145243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.145255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:51.924 [2024-11-21 00:20:42.145265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:31:51.924 [2024-11-21 00:20:42.145276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.145292] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:51.924 [2024-11-21 00:20:42.145339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.145350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:51.924 [2024-11-21 00:20:42.145359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:51.924 [2024-11-21 00:20:42.145370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.156253] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:51.924 [2024-11-21 00:20:42.156428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.156441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:51.924 [2024-11-21 00:20:42.156452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.034 ms 00:31:51.924 [2024-11-21 00:20:42.156461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.159068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.159246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:51.924 [2024-11-21 00:20:42.159264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:31:51.924 [2024-11-21 00:20:42.159274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.159398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.159416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:51.924 [2024-11-21 00:20:42.159429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:51.924 [2024-11-21 00:20:42.159437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.159467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.159480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:51.924 [2024-11-21 00:20:42.159488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:51.924 [2024-11-21 00:20:42.159496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.159539] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:51.924 [2024-11-21 00:20:42.159558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.159566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:51.924 [2024-11-21 00:20:42.159574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:51.924 [2024-11-21 00:20:42.159585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.166959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.167014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:51.924 [2024-11-21 00:20:42.167036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.352 ms 00:31:51.924 [2024-11-21 00:20:42.167044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.167145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.924 [2024-11-21 00:20:42.167156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:51.924 [2024-11-21 00:20:42.167166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.043 ms 00:31:51.924 [2024-11-21 00:20:42.167178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.924 [2024-11-21 00:20:42.168663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 92.030 ms, result 0 00:31:53.306 [2024-11-21T00:20:44.668Z] Copying: 10/1024 [MB] (10 MBps) ... [2024-11-21T00:21:50.105Z] Copying: 1024/1024 [MB] (average 15 MBps)
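For scale: the dd pass above moves 1024 MB between the first progress sample at 00:20:44 and completion at 00:21:50, roughly 66 seconds, i.e. 1024 / 66 ~ 15.5 MB/s, consistent with the reported average of 15 MBps. The elided per-interval samples fluctuate between roughly 10 and 24 MBps and carry no other state.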
[2024-11-21 00:21:49.930364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.684 [2024-11-21 00:21:49.930442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:59.684 [2024-11-21 00:21:49.930460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:59.684 [2024-11-21 00:21:49.930478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.684 [2024-11-21 00:21:49.930507] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:59.684 [2024-11-21 00:21:49.931698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.684 [2024-11-21 00:21:49.931776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:59.684 [2024-11-21 00:21:49.931889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:32:59.684 [2024-11-21 00:21:49.931914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.684 [2024-11-21 00:21:49.932187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.684 [2024-11-21 00:21:49.932327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:59.684 [2024-11-21 00:21:49.932357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:32:59.684 [2024-11-21 00:21:49.932536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.684 [2024-11-21 00:21:49.932629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.684 [2024-11-21 00:21:49.932654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:59.684 [2024-11-21 00:21:49.932855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:59.684 [2024-11-21 00:21:49.932904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.684 [2024-11-21 00:21:49.932998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.684 [2024-11-21 00:21:49.933009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:59.684 [2024-11-21 00:21:49.933019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:59.685 [2024-11-21 00:21:49.933028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.933043] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:59.685 [2024-11-21 00:21:49.933058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 
00:21:49.933069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 
[2024-11-21 00:21:49.933271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free 00:32:59.685 [2024-11-21 00:21:49.933489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 
0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:59.685 [2024-11-21 00:21:49.933915] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:32:59.685 [2024-11-21 00:21:49.933923] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ebebdb1-f50a-4a1e-8f04-426f1bc9f828 00:32:59.685 [2024-11-21 00:21:49.933935] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:59.685 [2024-11-21 00:21:49.933946] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:59.685 [2024-11-21 00:21:49.933955] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:59.685 [2024-11-21 00:21:49.933964] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:59.685 [2024-11-21 00:21:49.933971] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:59.685 [2024-11-21 00:21:49.933984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:59.685 [2024-11-21 00:21:49.933992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:59.685 [2024-11-21 00:21:49.933998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:59.685 [2024-11-21 00:21:49.934005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:59.685 [2024-11-21 00:21:49.934012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.685 [2024-11-21 00:21:49.934020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:59.685 [2024-11-21 00:21:49.934028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:32:59.685 [2024-11-21 00:21:49.934036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.938070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.685 [2024-11-21 00:21:49.938109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:59.685 [2024-11-21 00:21:49.938121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:32:59.685 [2024-11-21 00:21:49.938129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.938283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.685 [2024-11-21 00:21:49.938317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:59.685 [2024-11-21 00:21:49.938329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:32:59.685 [2024-11-21 00:21:49.938347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.947764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 [2024-11-21 00:21:49.947821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:59.685 [2024-11-21 00:21:49.947833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.947842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.947906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 [2024-11-21 00:21:49.947916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:59.685 [2024-11-21 00:21:49.947924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.947945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.947985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 
[2024-11-21 00:21:49.947996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:59.685 [2024-11-21 00:21:49.948010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.948019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.948035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 [2024-11-21 00:21:49.948043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:59.685 [2024-11-21 00:21:49.948051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.948058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.967368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 [2024-11-21 00:21:49.967421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:59.685 [2024-11-21 00:21:49.967433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.967442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.982847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 [2024-11-21 00:21:49.982899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:59.685 [2024-11-21 00:21:49.982913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.982922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.983029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 [2024-11-21 00:21:49.983041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:59.685 [2024-11-21 00:21:49.983057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.983066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.983107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 [2024-11-21 00:21:49.983118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:59.685 [2024-11-21 00:21:49.983127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.983136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.983197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 [2024-11-21 00:21:49.983211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:59.685 [2024-11-21 00:21:49.983220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.983229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.983257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 [2024-11-21 00:21:49.983267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:59.685 [2024-11-21 00:21:49.983275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.983283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.983355] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 [2024-11-21 00:21:49.983367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:59.685 [2024-11-21 00:21:49.983380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.983390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.983450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.685 [2024-11-21 00:21:49.983464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:59.685 [2024-11-21 00:21:49.983474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.685 [2024-11-21 00:21:49.983483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.685 [2024-11-21 00:21:49.983644] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 53.237 ms, result 0 00:32:59.945 00:32:59.945 00:32:59.945 00:21:50 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:02.485 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:02.485 00:21:52 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:02.485 [2024-11-21 00:21:52.420984] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:33:02.485 [2024-11-21 00:21:52.421075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95667 ] 00:33:02.485 [2024-11-21 00:21:52.553198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.485 [2024-11-21 00:21:52.615870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.485 [2024-11-21 00:21:52.767038] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:02.485 [2024-11-21 00:21:52.767549] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:02.747 [2024-11-21 00:21:52.931740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.747 [2024-11-21 00:21:52.931802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:02.747 [2024-11-21 00:21:52.931823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:02.747 [2024-11-21 00:21:52.931833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.747 [2024-11-21 00:21:52.931896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.747 [2024-11-21 00:21:52.931910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:02.747 [2024-11-21 00:21:52.931919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:33:02.747 [2024-11-21 00:21:52.931936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.747 [2024-11-21 00:21:52.931962] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:02.747 [2024-11-21 00:21:52.932277] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: 
*NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:02.747 [2024-11-21 00:21:52.932335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.747 [2024-11-21 00:21:52.932345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:02.747 [2024-11-21 00:21:52.932363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:33:02.747 [2024-11-21 00:21:52.932373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.747 [2024-11-21 00:21:52.932757] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:02.747 [2024-11-21 00:21:52.932800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.747 [2024-11-21 00:21:52.932810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:02.747 [2024-11-21 00:21:52.932821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:02.747 [2024-11-21 00:21:52.932832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.747 [2024-11-21 00:21:52.932904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.747 [2024-11-21 00:21:52.932923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:02.747 [2024-11-21 00:21:52.932936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:33:02.747 [2024-11-21 00:21:52.932944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.747 [2024-11-21 00:21:52.933214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.747 [2024-11-21 00:21:52.933237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:02.747 [2024-11-21 00:21:52.933247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:33:02.747 [2024-11-21 00:21:52.933256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.747 [2024-11-21 00:21:52.933366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.747 [2024-11-21 00:21:52.933382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:02.747 [2024-11-21 00:21:52.933392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:33:02.747 [2024-11-21 00:21:52.933400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.747 [2024-11-21 00:21:52.933425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.747 [2024-11-21 00:21:52.933437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:02.747 [2024-11-21 00:21:52.933446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:02.747 [2024-11-21 00:21:52.933454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.747 [2024-11-21 00:21:52.933482] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:02.747 [2024-11-21 00:21:52.936355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.747 [2024-11-21 00:21:52.936406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:02.747 [2024-11-21 00:21:52.936420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.879 ms 00:33:02.747 [2024-11-21 00:21:52.936429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.747 [2024-11-21 00:21:52.936471] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.747 [2024-11-21 00:21:52.936483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:02.747 [2024-11-21 00:21:52.936491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:02.747 [2024-11-21 00:21:52.936499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.747 [2024-11-21 00:21:52.936552] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:02.747 [2024-11-21 00:21:52.936579] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:02.747 [2024-11-21 00:21:52.936630] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:02.747 [2024-11-21 00:21:52.936648] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:02.747 [2024-11-21 00:21:52.936758] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:02.747 [2024-11-21 00:21:52.936771] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:02.747 [2024-11-21 00:21:52.936785] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:02.748 [2024-11-21 00:21:52.936797] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:02.748 [2024-11-21 00:21:52.936807] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:02.748 [2024-11-21 00:21:52.936820] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:02.748 [2024-11-21 00:21:52.936835] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:02.748 [2024-11-21 00:21:52.936843] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:02.748 [2024-11-21 00:21:52.936851] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:02.748 [2024-11-21 00:21:52.936860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.748 [2024-11-21 00:21:52.936872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:02.748 [2024-11-21 00:21:52.936882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:33:02.748 [2024-11-21 00:21:52.936894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.748 [2024-11-21 00:21:52.936978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.748 [2024-11-21 00:21:52.936993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:02.748 [2024-11-21 00:21:52.937002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:02.748 [2024-11-21 00:21:52.937013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.748 [2024-11-21 00:21:52.937121] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:02.748 [2024-11-21 00:21:52.937135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:02.748 [2024-11-21 00:21:52.937145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:02.748 [2024-11-21 00:21:52.937157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:33:02.748 [2024-11-21 00:21:52.937168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:02.748 [2024-11-21 00:21:52.937177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:02.748 [2024-11-21 00:21:52.937195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:02.748 [2024-11-21 00:21:52.937204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:02.748 [2024-11-21 00:21:52.937223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:02.748 [2024-11-21 00:21:52.937231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:02.748 [2024-11-21 00:21:52.937239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:02.748 [2024-11-21 00:21:52.937247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:02.748 [2024-11-21 00:21:52.937258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:02.748 [2024-11-21 00:21:52.937266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:02.748 [2024-11-21 00:21:52.937283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:02.748 [2024-11-21 00:21:52.937290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:02.748 [2024-11-21 00:21:52.937343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:02.748 [2024-11-21 00:21:52.937364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:02.748 [2024-11-21 00:21:52.937375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:02.748 [2024-11-21 00:21:52.937395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:02.748 [2024-11-21 00:21:52.937403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:02.748 [2024-11-21 00:21:52.937420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:02.748 [2024-11-21 00:21:52.937427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:02.748 [2024-11-21 00:21:52.937442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:02.748 [2024-11-21 00:21:52.937451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:02.748 [2024-11-21 00:21:52.937466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:02.748 [2024-11-21 00:21:52.937477] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.38 MiB 00:33:02.748 [2024-11-21 00:21:52.937485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:02.748 [2024-11-21 00:21:52.937493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:02.748 [2024-11-21 00:21:52.937500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:02.748 [2024-11-21 00:21:52.937507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:02.748 [2024-11-21 00:21:52.937521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:02.748 [2024-11-21 00:21:52.937531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937538] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:02.748 [2024-11-21 00:21:52.937546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:02.748 [2024-11-21 00:21:52.937554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:02.748 [2024-11-21 00:21:52.937561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:02.748 [2024-11-21 00:21:52.937570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:02.748 [2024-11-21 00:21:52.937578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:02.748 [2024-11-21 00:21:52.937586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:02.748 [2024-11-21 00:21:52.937593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:02.748 [2024-11-21 00:21:52.937602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:02.748 [2024-11-21 00:21:52.937609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:02.748 [2024-11-21 00:21:52.937619] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:02.748 [2024-11-21 00:21:52.937631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:02.748 [2024-11-21 00:21:52.937640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:02.748 [2024-11-21 00:21:52.937651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:02.748 [2024-11-21 00:21:52.937661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:02.748 [2024-11-21 00:21:52.937669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:02.748 [2024-11-21 00:21:52.937676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:02.748 [2024-11-21 00:21:52.937685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:02.748 [2024-11-21 00:21:52.937693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:02.748 [2024-11-21 00:21:52.937701] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:02.748 [2024-11-21 00:21:52.937708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:02.748 [2024-11-21 00:21:52.937716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:02.748 [2024-11-21 00:21:52.937723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:02.748 [2024-11-21 00:21:52.937738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:02.748 [2024-11-21 00:21:52.937748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:02.748 [2024-11-21 00:21:52.937756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:02.748 [2024-11-21 00:21:52.937763] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:02.748 [2024-11-21 00:21:52.937773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:02.748 [2024-11-21 00:21:52.937783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:02.748 [2024-11-21 00:21:52.937790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:02.748 [2024-11-21 00:21:52.937798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:02.748 [2024-11-21 00:21:52.937805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:02.748 [2024-11-21 00:21:52.937813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.748 [2024-11-21 00:21:52.937823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:02.748 [2024-11-21 00:21:52.937830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:33:02.748 [2024-11-21 00:21:52.937838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.748 [2024-11-21 00:21:52.959840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.748 [2024-11-21 00:21:52.959899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:02.748 [2024-11-21 00:21:52.959917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.954 ms 00:33:02.748 [2024-11-21 00:21:52.959927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:52.960033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:52.960046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:02.749 [2024-11-21 00:21:52.960056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:33:02.749 [2024-11-21 00:21:52.960063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:02.749 [2024-11-21 00:21:52.976343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:52.976391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:02.749 [2024-11-21 00:21:52.976410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.191 ms 00:33:02.749 [2024-11-21 00:21:52.976426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:52.976467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:52.976480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:02.749 [2024-11-21 00:21:52.976495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:02.749 [2024-11-21 00:21:52.976504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:52.976609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:52.976623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:02.749 [2024-11-21 00:21:52.976633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:33:02.749 [2024-11-21 00:21:52.976647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:52.976787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:52.976800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:02.749 [2024-11-21 00:21:52.976814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:33:02.749 [2024-11-21 00:21:52.976825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:52.986627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:52.986831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:02.749 [2024-11-21 00:21:52.986852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.779 ms 00:33:02.749 [2024-11-21 00:21:52.986861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:52.987020] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:02.749 [2024-11-21 00:21:52.987037] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:02.749 [2024-11-21 00:21:52.987048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:52.987058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:02.749 [2024-11-21 00:21:52.987069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:02.749 [2024-11-21 00:21:52.987078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:52.999447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:52.999498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:02.749 [2024-11-21 00:21:52.999510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.351 ms 00:33:02.749 [2024-11-21 00:21:52.999523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:52.999667] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:52.999678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:02.749 [2024-11-21 00:21:52.999688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:33:02.749 [2024-11-21 00:21:52.999699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:52.999759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:52.999775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:02.749 [2024-11-21 00:21:52.999784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:02.749 [2024-11-21 00:21:52.999799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:53.000165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:53.000194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:02.749 [2024-11-21 00:21:53.000204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:33:02.749 [2024-11-21 00:21:53.000217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:53.000238] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:02.749 [2024-11-21 00:21:53.000250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:53.000258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:02.749 [2024-11-21 00:21:53.000267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:02.749 [2024-11-21 00:21:53.000278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:53.011008] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:02.749 [2024-11-21 00:21:53.011185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:53.011199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:02.749 [2024-11-21 00:21:53.011211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.858 ms 00:33:02.749 [2024-11-21 00:21:53.011220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:53.013835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:53.014016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:02.749 [2024-11-21 00:21:53.014035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms 00:33:02.749 [2024-11-21 00:21:53.014043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:53.014169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:53.014184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:02.749 [2024-11-21 00:21:53.014201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:33:02.749 [2024-11-21 00:21:53.014210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:53.014242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:53.014256] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:02.749 [2024-11-21 00:21:53.014267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:02.749 [2024-11-21 00:21:53.014275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:53.014341] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:02.749 [2024-11-21 00:21:53.014357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:53.014367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:02.749 [2024-11-21 00:21:53.014376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:33:02.749 [2024-11-21 00:21:53.014385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:53.022185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:53.022241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:02.749 [2024-11-21 00:21:53.022261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.776 ms 00:33:02.749 [2024-11-21 00:21:53.022270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:53.022406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.749 [2024-11-21 00:21:53.022420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:02.749 [2024-11-21 00:21:53.022430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:33:02.749 [2024-11-21 00:21:53.022442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.749 [2024-11-21 00:21:53.023929] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 91.692 ms, result 0 00:33:03.691  [2024-11-21T00:21:55.055Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-21T00:21:56.439Z] Copying: 29/1024 [MB] (16 MBps) [2024-11-21T00:21:57.374Z] Copying: 48/1024 [MB] (18 MBps) [2024-11-21T00:21:58.310Z] Copying: 65/1024 [MB] (17 MBps) [2024-11-21T00:21:59.255Z] Copying: 80/1024 [MB] (15 MBps) [2024-11-21T00:22:00.201Z] Copying: 102/1024 [MB] (21 MBps) [2024-11-21T00:22:01.138Z] Copying: 116/1024 [MB] (13 MBps) [2024-11-21T00:22:02.074Z] Copying: 134/1024 [MB] (17 MBps) [2024-11-21T00:22:03.451Z] Copying: 156/1024 [MB] (21 MBps) [2024-11-21T00:22:04.389Z] Copying: 176/1024 [MB] (20 MBps) [2024-11-21T00:22:05.386Z] Copying: 194/1024 [MB] (17 MBps) [2024-11-21T00:22:06.321Z] Copying: 215/1024 [MB] (20 MBps) [2024-11-21T00:22:07.256Z] Copying: 235/1024 [MB] (20 MBps) [2024-11-21T00:22:08.191Z] Copying: 247/1024 [MB] (11 MBps) [2024-11-21T00:22:09.126Z] Copying: 258/1024 [MB] (11 MBps) [2024-11-21T00:22:10.069Z] Copying: 270/1024 [MB] (11 MBps) [2024-11-21T00:22:11.445Z] Copying: 281/1024 [MB] (11 MBps) [2024-11-21T00:22:12.380Z] Copying: 292/1024 [MB] (10 MBps) [2024-11-21T00:22:13.315Z] Copying: 303/1024 [MB] (11 MBps) [2024-11-21T00:22:14.249Z] Copying: 314/1024 [MB] (10 MBps) [2024-11-21T00:22:15.181Z] Copying: 325/1024 [MB] (11 MBps) [2024-11-21T00:22:16.115Z] Copying: 337/1024 [MB] (11 MBps) [2024-11-21T00:22:17.051Z] Copying: 348/1024 [MB] (11 MBps) [2024-11-21T00:22:18.425Z] Copying: 360/1024 [MB] (11 MBps) [2024-11-21T00:22:19.358Z] Copying: 371/1024 [MB] (11 MBps) [2024-11-21T00:22:20.292Z] Copying: 383/1024 [MB] (11 MBps) [2024-11-21T00:22:21.227Z] Copying: 
394/1024 [MB] (11 MBps) [2024-11-21T00:22:22.161Z] Copying: 405/1024 [MB] (11 MBps) [2024-11-21T00:22:23.096Z] Copying: 417/1024 [MB] (11 MBps) [2024-11-21T00:22:24.050Z] Copying: 429/1024 [MB] (11 MBps) [2024-11-21T00:22:25.429Z] Copying: 440/1024 [MB] (11 MBps) [2024-11-21T00:22:26.363Z] Copying: 450/1024 [MB] (10 MBps) [2024-11-21T00:22:27.297Z] Copying: 462/1024 [MB] (11 MBps) [2024-11-21T00:22:28.232Z] Copying: 474/1024 [MB] (11 MBps) [2024-11-21T00:22:29.167Z] Copying: 485/1024 [MB] (11 MBps) [2024-11-21T00:22:30.106Z] Copying: 497/1024 [MB] (11 MBps) [2024-11-21T00:22:31.041Z] Copying: 508/1024 [MB] (10 MBps) [2024-11-21T00:22:32.421Z] Copying: 519/1024 [MB] (11 MBps) [2024-11-21T00:22:33.362Z] Copying: 530/1024 [MB] (11 MBps) [2024-11-21T00:22:34.354Z] Copying: 541/1024 [MB] (10 MBps) [2024-11-21T00:22:35.290Z] Copying: 553/1024 [MB] (11 MBps) [2024-11-21T00:22:36.225Z] Copying: 564/1024 [MB] (11 MBps) [2024-11-21T00:22:37.161Z] Copying: 579/1024 [MB] (14 MBps) [2024-11-21T00:22:38.096Z] Copying: 590/1024 [MB] (11 MBps) [2024-11-21T00:22:39.471Z] Copying: 602/1024 [MB] (11 MBps) [2024-11-21T00:22:40.405Z] Copying: 613/1024 [MB] (11 MBps) [2024-11-21T00:22:41.341Z] Copying: 628/1024 [MB] (15 MBps) [2024-11-21T00:22:42.277Z] Copying: 639/1024 [MB] (11 MBps) [2024-11-21T00:22:43.214Z] Copying: 650/1024 [MB] (11 MBps) [2024-11-21T00:22:44.153Z] Copying: 662/1024 [MB] (11 MBps) [2024-11-21T00:22:45.088Z] Copying: 672/1024 [MB] (10 MBps) [2024-11-21T00:22:46.463Z] Copying: 684/1024 [MB] (11 MBps) [2024-11-21T00:22:47.397Z] Copying: 695/1024 [MB] (11 MBps) [2024-11-21T00:22:48.333Z] Copying: 707/1024 [MB] (11 MBps) [2024-11-21T00:22:49.268Z] Copying: 719/1024 [MB] (11 MBps) [2024-11-21T00:22:50.202Z] Copying: 730/1024 [MB] (11 MBps) [2024-11-21T00:22:51.138Z] Copying: 745/1024 [MB] (14 MBps) [2024-11-21T00:22:52.073Z] Copying: 756/1024 [MB] (11 MBps) [2024-11-21T00:22:53.449Z] Copying: 768/1024 [MB] (11 MBps) [2024-11-21T00:22:54.384Z] Copying: 780/1024 [MB] (12 MBps) [2024-11-21T00:22:55.321Z] Copying: 792/1024 [MB] (11 MBps) [2024-11-21T00:22:56.274Z] Copying: 803/1024 [MB] (11 MBps) [2024-11-21T00:22:57.212Z] Copying: 814/1024 [MB] (11 MBps) [2024-11-21T00:22:58.155Z] Copying: 826/1024 [MB] (11 MBps) [2024-11-21T00:22:59.090Z] Copying: 836/1024 [MB] (10 MBps) [2024-11-21T00:23:00.467Z] Copying: 850/1024 [MB] (13 MBps) [2024-11-21T00:23:01.038Z] Copying: 861/1024 [MB] (11 MBps) [2024-11-21T00:23:02.448Z] Copying: 871/1024 [MB] (10 MBps) [2024-11-21T00:23:03.390Z] Copying: 883/1024 [MB] (11 MBps) [2024-11-21T00:23:04.324Z] Copying: 894/1024 [MB] (11 MBps) [2024-11-21T00:23:05.265Z] Copying: 905/1024 [MB] (11 MBps) [2024-11-21T00:23:06.200Z] Copying: 916/1024 [MB] (10 MBps) [2024-11-21T00:23:07.135Z] Copying: 928/1024 [MB] (11 MBps) [2024-11-21T00:23:08.069Z] Copying: 939/1024 [MB] (11 MBps) [2024-11-21T00:23:09.445Z] Copying: 951/1024 [MB] (11 MBps) [2024-11-21T00:23:10.378Z] Copying: 963/1024 [MB] (11 MBps) [2024-11-21T00:23:11.312Z] Copying: 975/1024 [MB] (11 MBps) [2024-11-21T00:23:12.247Z] Copying: 986/1024 [MB] (11 MBps) [2024-11-21T00:23:13.183Z] Copying: 998/1024 [MB] (11 MBps) [2024-11-21T00:23:14.126Z] Copying: 1009/1024 [MB] (11 MBps) [2024-11-21T00:23:15.069Z] Copying: 1019/1024 [MB] (10 MBps) [2024-11-21T00:23:15.331Z] Copying: 1048336/1048576 [kB] (4080 kBps) [2024-11-21T00:23:15.331Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-21 00:23:15.322220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.910 [2024-11-21 00:23:15.322322] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:24.910 [2024-11-21 00:23:15.322344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:24.910 [2024-11-21 00:23:15.322354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.910 [2024-11-21 00:23:15.325008] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:25.173 [2024-11-21 00:23:15.327723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.173 [2024-11-21 00:23:15.327767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:25.173 [2024-11-21 00:23:15.327780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.654 ms 00:34:25.173 [2024-11-21 00:23:15.327790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.173 [2024-11-21 00:23:15.340759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.173 [2024-11-21 00:23:15.340807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:25.173 [2024-11-21 00:23:15.340828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.089 ms 00:34:25.173 [2024-11-21 00:23:15.340838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.173 [2024-11-21 00:23:15.340902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.173 [2024-11-21 00:23:15.340914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:25.173 [2024-11-21 00:23:15.340929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:25.173 [2024-11-21 00:23:15.340938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.173 [2024-11-21 00:23:15.341001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.173 [2024-11-21 00:23:15.341012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:25.173 [2024-11-21 00:23:15.341022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:25.173 [2024-11-21 00:23:15.341037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.173 [2024-11-21 00:23:15.341052] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:25.173 [2024-11-21 00:23:15.341065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126976 / 261120 wr_cnt: 1 state: open 00:34:25.173 [2024-11-21 00:23:15.341076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341130] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 
[2024-11-21 00:23:15.341351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:25.173 [2024-11-21 00:23:15.341377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 
state: free 00:34:25.174 [2024-11-21 00:23:15.341551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 
0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:25.174 [2024-11-21 00:23:15.341896] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:25.174 [2024-11-21 00:23:15.341905] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ebebdb1-f50a-4a1e-8f04-426f1bc9f828 00:34:25.174 [2024-11-21 00:23:15.341919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126976 00:34:25.174 [2024-11-21 00:23:15.341927] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127008 00:34:25.174 [2024-11-21 00:23:15.341934] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126976 00:34:25.174 [2024-11-21 00:23:15.341943] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:34:25.174 [2024-11-21 00:23:15.341952] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:25.174 [2024-11-21 00:23:15.341965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:25.174 [2024-11-21 00:23:15.341983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:25.174 [2024-11-21 
00:23:15.341989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:25.174 [2024-11-21 00:23:15.341997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:25.174 [2024-11-21 00:23:15.342005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.174 [2024-11-21 00:23:15.342014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:25.174 [2024-11-21 00:23:15.342022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:34:25.175 [2024-11-21 00:23:15.342030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.345520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.175 [2024-11-21 00:23:15.345558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:25.175 [2024-11-21 00:23:15.345570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.474 ms 00:34:25.175 [2024-11-21 00:23:15.345581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.345763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.175 [2024-11-21 00:23:15.345775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:25.175 [2024-11-21 00:23:15.345788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:34:25.175 [2024-11-21 00:23:15.345796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.355012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.355179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:25.175 [2024-11-21 00:23:15.355247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.355271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.355367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.355393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:25.175 [2024-11-21 00:23:15.355413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.355432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.355500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.355532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:25.175 [2024-11-21 00:23:15.355559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.355632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.355672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.355694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:25.175 [2024-11-21 00:23:15.355714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.355740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.375278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.375473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:34:25.175 [2024-11-21 00:23:15.375531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.375565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.391859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.392044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:25.175 [2024-11-21 00:23:15.392066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.392075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.392167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.392180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:25.175 [2024-11-21 00:23:15.392189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.392211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.392256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.392267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:25.175 [2024-11-21 00:23:15.392276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.392286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.392378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.392389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:25.175 [2024-11-21 00:23:15.392398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.392408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.392438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.392453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:25.175 [2024-11-21 00:23:15.392462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.392472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.392526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.392540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:25.175 [2024-11-21 00:23:15.392548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.392560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.392627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:25.175 [2024-11-21 00:23:15.392641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:25.175 [2024-11-21 00:23:15.392651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:25.175 [2024-11-21 00:23:15.392662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.175 [2024-11-21 00:23:15.392819] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL fast shutdown', duration = 71.221 ms, result 0 00:34:26.119 00:34:26.119 00:34:26.119 00:23:16 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:26.119 [2024-11-21 00:23:16.258477] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:34:26.119 [2024-11-21 00:23:16.258626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96509 ] 00:34:26.119 [2024-11-21 00:23:16.394089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.119 [2024-11-21 00:23:16.464487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.382 [2024-11-21 00:23:16.613542] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:26.382 [2024-11-21 00:23:16.613630] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:26.382 [2024-11-21 00:23:16.776731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.382 [2024-11-21 00:23:16.776793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:26.382 [2024-11-21 00:23:16.776816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:26.382 [2024-11-21 00:23:16.776825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.382 [2024-11-21 00:23:16.776890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.382 [2024-11-21 00:23:16.776902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:26.382 [2024-11-21 00:23:16.776911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:34:26.382 [2024-11-21 00:23:16.776927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.382 [2024-11-21 00:23:16.776957] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:26.382 [2024-11-21 00:23:16.777243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:26.382 [2024-11-21 00:23:16.777263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.382 [2024-11-21 00:23:16.777272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:26.382 [2024-11-21 00:23:16.777282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:34:26.382 [2024-11-21 00:23:16.777290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.382 [2024-11-21 00:23:16.777635] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:26.382 [2024-11-21 00:23:16.777682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.382 [2024-11-21 00:23:16.777692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:26.382 [2024-11-21 00:23:16.777703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:34:26.382 [2024-11-21 00:23:16.777712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.382 [2024-11-21 00:23:16.777778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:34:26.382 [2024-11-21 00:23:16.777796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:26.382 [2024-11-21 00:23:16.777809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:34:26.382 [2024-11-21 00:23:16.777820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.382 [2024-11-21 00:23:16.778083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.382 [2024-11-21 00:23:16.778095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:26.382 [2024-11-21 00:23:16.778105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:34:26.382 [2024-11-21 00:23:16.778119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.382 [2024-11-21 00:23:16.778278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.382 [2024-11-21 00:23:16.778322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:26.382 [2024-11-21 00:23:16.778335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:34:26.382 [2024-11-21 00:23:16.778344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.382 [2024-11-21 00:23:16.778371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.382 [2024-11-21 00:23:16.778381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:26.382 [2024-11-21 00:23:16.778390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:26.382 [2024-11-21 00:23:16.778398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.382 [2024-11-21 00:23:16.778422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:26.382 [2024-11-21 00:23:16.781187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.382 [2024-11-21 00:23:16.781237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:26.382 [2024-11-21 00:23:16.781249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.770 ms 00:34:26.382 [2024-11-21 00:23:16.781258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.382 [2024-11-21 00:23:16.781317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.382 [2024-11-21 00:23:16.781327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:26.382 [2024-11-21 00:23:16.781342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:34:26.382 [2024-11-21 00:23:16.781352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.382 [2024-11-21 00:23:16.781409] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:26.382 [2024-11-21 00:23:16.781440] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:26.382 [2024-11-21 00:23:16.781488] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:26.382 [2024-11-21 00:23:16.781506] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:26.382 [2024-11-21 00:23:16.781617] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:26.382 [2024-11-21 00:23:16.781633] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:26.382 [2024-11-21 00:23:16.781644] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:26.382 [2024-11-21 00:23:16.781656] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:26.382 [2024-11-21 00:23:16.781665] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:26.382 [2024-11-21 00:23:16.781674] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:26.382 [2024-11-21 00:23:16.781687] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:26.383 [2024-11-21 00:23:16.781699] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:26.383 [2024-11-21 00:23:16.781706] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:26.383 [2024-11-21 00:23:16.781714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.383 [2024-11-21 00:23:16.781724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:26.383 [2024-11-21 00:23:16.781736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:34:26.383 [2024-11-21 00:23:16.781747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.383 [2024-11-21 00:23:16.781836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.383 [2024-11-21 00:23:16.781847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:26.383 [2024-11-21 00:23:16.781858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:26.383 [2024-11-21 00:23:16.781868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.383 [2024-11-21 00:23:16.781971] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:26.383 [2024-11-21 00:23:16.781991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:26.383 [2024-11-21 00:23:16.782000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:26.383 [2024-11-21 00:23:16.782008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:26.383 [2024-11-21 00:23:16.782024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:26.383 [2024-11-21 00:23:16.782042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:26.383 [2024-11-21 00:23:16.782050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:26.383 [2024-11-21 00:23:16.782068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:26.383 [2024-11-21 00:23:16.782074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:26.383 [2024-11-21 00:23:16.782081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:26.383 [2024-11-21 00:23:16.782089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:26.383 [2024-11-21 00:23:16.782098] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:26.383 [2024-11-21 00:23:16.782109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:26.383 [2024-11-21 00:23:16.782124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:26.383 [2024-11-21 00:23:16.782131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:26.383 [2024-11-21 00:23:16.782147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:26.383 [2024-11-21 00:23:16.782160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:26.383 [2024-11-21 00:23:16.782167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:26.383 [2024-11-21 00:23:16.782185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:26.383 [2024-11-21 00:23:16.782193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:26.383 [2024-11-21 00:23:16.782207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:26.383 [2024-11-21 00:23:16.782214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:26.383 [2024-11-21 00:23:16.782227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:26.383 [2024-11-21 00:23:16.782234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:26.383 [2024-11-21 00:23:16.782247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:26.383 [2024-11-21 00:23:16.782255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:26.383 [2024-11-21 00:23:16.782262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:26.383 [2024-11-21 00:23:16.782270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:26.383 [2024-11-21 00:23:16.782277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:26.383 [2024-11-21 00:23:16.782283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:26.383 [2024-11-21 00:23:16.782322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:26.383 [2024-11-21 00:23:16.782329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782336] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:26.383 [2024-11-21 00:23:16.782344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:26.383 [2024-11-21 00:23:16.782352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:34:26.383 [2024-11-21 00:23:16.782363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.383 [2024-11-21 00:23:16.782373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:26.383 [2024-11-21 00:23:16.782381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:26.383 [2024-11-21 00:23:16.782388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:26.383 [2024-11-21 00:23:16.782396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:26.383 [2024-11-21 00:23:16.782403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:26.383 [2024-11-21 00:23:16.782410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:26.383 [2024-11-21 00:23:16.782420] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:26.383 [2024-11-21 00:23:16.782434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:26.383 [2024-11-21 00:23:16.782447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:26.383 [2024-11-21 00:23:16.782455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:26.383 [2024-11-21 00:23:16.782466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:26.383 [2024-11-21 00:23:16.782473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:26.383 [2024-11-21 00:23:16.782480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:26.383 [2024-11-21 00:23:16.782488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:26.383 [2024-11-21 00:23:16.782495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:26.383 [2024-11-21 00:23:16.782503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:26.383 [2024-11-21 00:23:16.782510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:26.383 [2024-11-21 00:23:16.782519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:26.383 [2024-11-21 00:23:16.782527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:26.383 [2024-11-21 00:23:16.782540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:26.383 [2024-11-21 00:23:16.782548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:26.383 [2024-11-21 00:23:16.782555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:34:26.383 [2024-11-21 00:23:16.782562] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:26.383 [2024-11-21 00:23:16.782571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:26.383 [2024-11-21 00:23:16.782581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:26.383 [2024-11-21 00:23:16.782588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:26.383 [2024-11-21 00:23:16.782597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:26.383 [2024-11-21 00:23:16.782605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:26.383 [2024-11-21 00:23:16.782613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.383 [2024-11-21 00:23:16.782625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:26.383 [2024-11-21 00:23:16.782634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:34:26.383 [2024-11-21 00:23:16.782642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.809418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.809738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:26.646 [2024-11-21 00:23:16.809934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.723 ms 00:34:26.646 [2024-11-21 00:23:16.810007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.810253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.810430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:26.646 [2024-11-21 00:23:16.810498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:34:26.646 [2024-11-21 00:23:16.810545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.826951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.827120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:26.646 [2024-11-21 00:23:16.827186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.171 ms 00:34:26.646 [2024-11-21 00:23:16.827218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.827274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.827324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:26.646 [2024-11-21 00:23:16.827355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:26.646 [2024-11-21 00:23:16.827381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.827708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.827852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:26.646 [2024-11-21 
00:23:16.827908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:34:26.646 [2024-11-21 00:23:16.827984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.828169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.828310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:26.646 [2024-11-21 00:23:16.828378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:34:26.646 [2024-11-21 00:23:16.828402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.837880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.838027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:26.646 [2024-11-21 00:23:16.838079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.440 ms 00:34:26.646 [2024-11-21 00:23:16.838102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.838273] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:26.646 [2024-11-21 00:23:16.838385] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:26.646 [2024-11-21 00:23:16.838518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.838550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:26.646 [2024-11-21 00:23:16.838574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:34:26.646 [2024-11-21 00:23:16.838595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.851092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.851237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:26.646 [2024-11-21 00:23:16.851306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.466 ms 00:34:26.646 [2024-11-21 00:23:16.851332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.851487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.851513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:26.646 [2024-11-21 00:23:16.851533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:34:26.646 [2024-11-21 00:23:16.851552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.851621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.851701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:26.646 [2024-11-21 00:23:16.851712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:26.646 [2024-11-21 00:23:16.851725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.852054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.646 [2024-11-21 00:23:16.852071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:26.646 [2024-11-21 00:23:16.852080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.283 ms 00:34:26.646 [2024-11-21 00:23:16.852114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.646 [2024-11-21 00:23:16.852134] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:26.646 [2024-11-21 00:23:16.852145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.647 [2024-11-21 00:23:16.852154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:26.647 [2024-11-21 00:23:16.852163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:26.647 [2024-11-21 00:23:16.852175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.647 [2024-11-21 00:23:16.862838] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:26.647 [2024-11-21 00:23:16.863122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.647 [2024-11-21 00:23:16.863139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:26.647 [2024-11-21 00:23:16.863150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.928 ms 00:34:26.647 [2024-11-21 00:23:16.863159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.647 [2024-11-21 00:23:16.865896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.647 [2024-11-21 00:23:16.865937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:26.647 [2024-11-21 00:23:16.865949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.702 ms 00:34:26.647 [2024-11-21 00:23:16.865957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.647 [2024-11-21 00:23:16.866055] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:26.647 [2024-11-21 00:23:16.866814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.647 [2024-11-21 00:23:16.866863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:26.647 [2024-11-21 00:23:16.866886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:34:26.647 [2024-11-21 00:23:16.866906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.647 [2024-11-21 00:23:16.867043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.647 [2024-11-21 00:23:16.867069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:26.647 [2024-11-21 00:23:16.867090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:26.647 [2024-11-21 00:23:16.867156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.647 [2024-11-21 00:23:16.867220] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:26.647 [2024-11-21 00:23:16.867246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.647 [2024-11-21 00:23:16.867328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:26.647 [2024-11-21 00:23:16.867343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:34:26.647 [2024-11-21 00:23:16.867351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.647 [2024-11-21 00:23:16.874561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.647 [2024-11-21 
00:23:16.874616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:26.647 [2024-11-21 00:23:16.874628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.171 ms 00:34:26.647 [2024-11-21 00:23:16.874637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.647 [2024-11-21 00:23:16.874762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.647 [2024-11-21 00:23:16.874773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:26.647 [2024-11-21 00:23:16.874782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:34:26.647 [2024-11-21 00:23:16.874791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.647 [2024-11-21 00:23:16.876164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.915 ms, result 0 00:34:28.029  [2024-11-21T00:23:19.384Z] Copying: 10/1024 [MB] (10 MBps) [... intermediate progress-meter ticks elided ...] [2024-11-21T00:24:39.238Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-21 00:24:39.079137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.817 [2024-11-21 00:24:39.079260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:48.817 [2024-11-21 00:24:39.079325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:48.817 [2024-11-21 00:24:39.079349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.817 [2024-11-21 00:24:39.079402] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:48.817 [2024-11-21 00:24:39.080332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.817 [2024-11-21 00:24:39.080357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:48.817 [2024-11-21 00:24:39.080366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:35:48.817 [2024-11-21 00:24:39.080373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.817 [2024-11-21 00:24:39.080572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.817 [2024-11-21 00:24:39.080584] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:48.817 [2024-11-21 00:24:39.080591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:35:48.817 [2024-11-21 00:24:39.080598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.817 [2024-11-21 00:24:39.080627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.817 [2024-11-21 00:24:39.080634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:48.817 [2024-11-21 00:24:39.080640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:48.817 [2024-11-21 00:24:39.080647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.817 [2024-11-21 00:24:39.080696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.817 [2024-11-21 00:24:39.080703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:48.817 [2024-11-21 00:24:39.080712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:35:48.817 [2024-11-21 00:24:39.080720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.817 [2024-11-21 00:24:39.080733] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:48.817 [2024-11-21 00:24:39.080743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:48.817 [2024-11-21 00:24:39.080751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080988] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.080999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.081005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:48.817 [2024-11-21 00:24:39.081011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 
00:24:39.081140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 
00:35:48.818 [2024-11-21 00:24:39.081285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:48.818 [2024-11-21 00:24:39.081371] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:48.818 [2024-11-21 00:24:39.081381] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ebebdb1-f50a-4a1e-8f04-426f1bc9f828 00:35:48.818 [2024-11-21 00:24:39.081387] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:35:48.818 [2024-11-21 00:24:39.081393] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4128 00:35:48.818 [2024-11-21 00:24:39.081400] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4096 00:35:48.818 [2024-11-21 00:24:39.081407] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0078 00:35:48.818 [2024-11-21 00:24:39.081413] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:48.818 [2024-11-21 00:24:39.081418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:48.818 [2024-11-21 00:24:39.081426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:48.818 [2024-11-21 00:24:39.081432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:48.818 [2024-11-21 00:24:39.081437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:48.818 [2024-11-21 00:24:39.081443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.818 [2024-11-21 00:24:39.081449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:48.818 [2024-11-21 00:24:39.081456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:35:48.818 [2024-11-21 00:24:39.081462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.818 [2024-11-21 00:24:39.083211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.818 [2024-11-21 00:24:39.083234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:48.818 [2024-11-21 00:24:39.083248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:35:48.818 [2024-11-21 00:24:39.083256] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.818 [2024-11-21 00:24:39.083367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:48.818 [2024-11-21 00:24:39.083375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:48.818 [2024-11-21 00:24:39.083382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:35:48.818 [2024-11-21 00:24:39.083389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.818 [2024-11-21 00:24:39.089317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.818 [2024-11-21 00:24:39.089339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:48.818 [2024-11-21 00:24:39.089351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:48.818 [2024-11-21 00:24:39.089357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.818 [2024-11-21 00:24:39.089410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.818 [2024-11-21 00:24:39.089418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:48.818 [2024-11-21 00:24:39.089425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:48.818 [2024-11-21 00:24:39.089431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.818 [2024-11-21 00:24:39.089479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.818 [2024-11-21 00:24:39.089487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:48.818 [2024-11-21 00:24:39.089493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:48.818 [2024-11-21 00:24:39.089502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.818 [2024-11-21 00:24:39.089517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.818 [2024-11-21 00:24:39.089524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:48.818 [2024-11-21 00:24:39.089530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:48.818 [2024-11-21 00:24:39.089536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.818 [2024-11-21 00:24:39.100094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.818 [2024-11-21 00:24:39.100285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:48.818 [2024-11-21 00:24:39.100313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:48.818 [2024-11-21 00:24:39.100324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.818 [2024-11-21 00:24:39.109238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.819 [2024-11-21 00:24:39.109270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:48.819 [2024-11-21 00:24:39.109285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:48.819 [2024-11-21 00:24:39.109291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.819 [2024-11-21 00:24:39.109526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.819 [2024-11-21 00:24:39.109536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:48.819 [2024-11-21 00:24:39.109544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:35:48.819 [2024-11-21 00:24:39.109550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.819 [2024-11-21 00:24:39.109574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.819 [2024-11-21 00:24:39.109582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:48.819 [2024-11-21 00:24:39.109588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:48.819 [2024-11-21 00:24:39.109595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.819 [2024-11-21 00:24:39.109639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.819 [2024-11-21 00:24:39.109648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:48.819 [2024-11-21 00:24:39.109655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:48.819 [2024-11-21 00:24:39.109661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.819 [2024-11-21 00:24:39.109686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.819 [2024-11-21 00:24:39.109694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:48.819 [2024-11-21 00:24:39.109701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:48.819 [2024-11-21 00:24:39.109707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.819 [2024-11-21 00:24:39.109745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.819 [2024-11-21 00:24:39.109753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:48.819 [2024-11-21 00:24:39.109760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:48.819 [2024-11-21 00:24:39.109767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.819 [2024-11-21 00:24:39.109806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:48.819 [2024-11-21 00:24:39.109814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:48.819 [2024-11-21 00:24:39.109822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:48.819 [2024-11-21 00:24:39.109828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:48.819 [2024-11-21 00:24:39.109940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 30.804 ms, result 0 00:35:49.079 00:35:49.079 00:35:49.079 00:24:39 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:51.624 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 93879 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93879 ']' 00:35:51.624 
Process with pid 93879 is not found 00:35:51.624 Remove shared memory files 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93879 00:35:51.624 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (93879) - No such process 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 93879 is not found' 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_band_md /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_l2p_l1 /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_l2p_l2 /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_l2p_l2_ctx /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_nvc_md /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_p2l_pool /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_sb /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_sb_shm /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_trim_bitmap /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_trim_log /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_trim_md /dev/hugepages/ftl_2ebebdb1-f50a-4a1e-8f04-426f1bc9f828_vmap 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:35:51.624 ************************************ 00:35:51.624 END TEST ftl_restore_fast 00:35:51.624 ************************************ 00:35:51.624 00:35:51.624 real 5m44.043s 00:35:51.624 user 5m33.001s 00:35:51.624 sys 0m10.809s 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:51.624 00:24:41 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:35:51.624 00:24:41 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:51.624 00:24:41 ftl -- ftl/ftl.sh@14 -- # killprocess 83755 00:35:51.624 00:24:41 ftl -- common/autotest_common.sh@950 -- # '[' -z 83755 ']' 00:35:51.624 00:24:41 ftl -- common/autotest_common.sh@954 -- # kill -0 83755 00:35:51.624 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83755) - No such process 00:35:51.624 Process with pid 83755 is not found 00:35:51.624 00:24:41 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 83755 is not found' 00:35:51.624 00:24:41 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:51.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
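The teardown above verifies the fast-shutdown restore end to end: the md5 check on the restored testfile passes, the dumped stats are self-consistent (the logged WAF of 1.0078 is just total writes over user writes, 4128 / 4096 = 1.0078125), and killprocess falls through to its "not found" branch because pid 93879 has already exited. A minimal sketch of that existence-check idiom, assuming an illustrative helper name rather than the exact body of autotest_common.sh's killprocess:

# A minimal sketch, assuming an illustrative helper name; the real killprocess
# lives in test/common/autotest_common.sh and its exact body is not in this log.
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1            # no pid recorded, nothing to kill
    if kill -0 "$pid" 2>/dev/null; then  # kill -0 sends no signal, only probes
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    else
        echo "Process with pid $pid is not found"
    fi
}

killprocess_sketch 93879

kill -0 only reports whether the pid is still signalable, which is why a process that already exited lands in the else branch, exactly as traced above.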
00:35:51.624 00:24:41 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97381 00:35:51.624 00:24:41 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97381 00:35:51.624 00:24:41 ftl -- common/autotest_common.sh@831 -- # '[' -z 97381 ']' 00:35:51.624 00:24:41 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.624 00:24:41 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:51.624 00:24:41 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:51.624 00:24:41 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.624 00:24:41 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:51.624 00:24:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:51.624 [2024-11-21 00:24:41.683571] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:35:51.624 [2024-11-21 00:24:41.683704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97381 ] 00:35:51.624 [2024-11-21 00:24:41.815413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.624 [2024-11-21 00:24:41.856024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:52.191 00:24:42 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:52.191 00:24:42 ftl -- common/autotest_common.sh@864 -- # return 0 00:35:52.191 00:24:42 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:52.450 nvme0n1 00:35:52.450 00:24:42 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:35:52.450 00:24:42 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:52.450 00:24:42 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:52.708 00:24:42 ftl -- ftl/common.sh@28 -- # stores=bd2048df-753e-42e1-8c7d-6b655eeb3c29 00:35:52.708 00:24:42 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:35:52.708 00:24:42 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bd2048df-753e-42e1-8c7d-6b655eeb3c29 00:35:52.967 00:24:43 ftl -- ftl/ftl.sh@23 -- # killprocess 97381 00:35:52.967 00:24:43 ftl -- common/autotest_common.sh@950 -- # '[' -z 97381 ']' 00:35:52.967 00:24:43 ftl -- common/autotest_common.sh@954 -- # kill -0 97381 00:35:52.967 00:24:43 ftl -- common/autotest_common.sh@955 -- # uname 00:35:52.967 00:24:43 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:52.967 00:24:43 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97381 00:35:52.967 killing process with pid 97381 00:35:52.967 00:24:43 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:52.967 00:24:43 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:52.967 00:24:43 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97381' 00:35:52.967 00:24:43 ftl -- common/autotest_common.sh@969 -- # kill 97381 00:35:52.967 00:24:43 ftl -- common/autotest_common.sh@974 -- # wait 97381 00:35:53.228 00:24:43 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:53.488 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:53.488 Waiting for block devices 
as requested 00:35:53.488 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:53.488 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:53.747 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:53.747 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:59.034 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:59.034 00:24:49 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:35:59.034 00:24:49 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:59.034 Remove shared memory files 00:35:59.034 00:24:49 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:35:59.034 00:24:49 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:35:59.034 00:24:49 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:35:59.034 00:24:49 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:59.034 00:24:49 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:35:59.034 ************************************ 00:35:59.034 END TEST ftl 00:35:59.034 ************************************ 00:35:59.034 00:35:59.034 real 20m50.330s 00:35:59.034 user 22m38.762s 00:35:59.034 sys 1m18.435s 00:35:59.034 00:24:49 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:59.034 00:24:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:59.034 00:24:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:59.034 00:24:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:59.034 00:24:49 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:35:59.034 00:24:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:59.034 00:24:49 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:35:59.034 00:24:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:59.034 00:24:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:59.034 00:24:49 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:35:59.034 00:24:49 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:35:59.034 00:24:49 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:35:59.034 00:24:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:59.034 00:24:49 -- common/autotest_common.sh@10 -- # set +x 00:35:59.034 00:24:49 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:35:59.034 00:24:49 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:35:59.034 00:24:49 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:35:59.034 00:24:49 -- common/autotest_common.sh@10 -- # set +x 00:36:00.416 INFO: APP EXITING 00:36:00.416 INFO: killing all VMs 00:36:00.416 INFO: killing vhost app 00:36:00.416 INFO: EXIT DONE 00:36:00.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:00.939 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:00.939 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:00.939 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:01.200 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:01.463 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:01.724 Cleaning 00:36:01.724 Removing: /var/run/dpdk/spdk0/config 00:36:01.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:01.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:01.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:01.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:01.986 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:01.986 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:01.986 Removing: /var/run/dpdk/spdk0 00:36:01.986 
Removing: /var/run/dpdk/spdk_pid69298 00:36:01.986 Removing: /var/run/dpdk/spdk_pid69450 00:36:01.986 Removing: /var/run/dpdk/spdk_pid69652 00:36:01.986 Removing: /var/run/dpdk/spdk_pid69739 00:36:01.986 Removing: /var/run/dpdk/spdk_pid69762 00:36:01.986 Removing: /var/run/dpdk/spdk_pid69874 00:36:01.986 Removing: /var/run/dpdk/spdk_pid69886 00:36:01.986 Removing: /var/run/dpdk/spdk_pid70069 00:36:01.986 Removing: /var/run/dpdk/spdk_pid70142 00:36:01.986 Removing: /var/run/dpdk/spdk_pid70222 00:36:01.986 Removing: /var/run/dpdk/spdk_pid70316 00:36:01.986 Removing: /var/run/dpdk/spdk_pid70397 00:36:01.986 Removing: /var/run/dpdk/spdk_pid70431 00:36:01.986 Removing: /var/run/dpdk/spdk_pid70467 00:36:01.986 Removing: /var/run/dpdk/spdk_pid70538 00:36:01.986 Removing: /var/run/dpdk/spdk_pid70633 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71058 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71100 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71146 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71157 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71215 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71231 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71289 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71305 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71347 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71365 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71407 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71425 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71552 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71583 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71672 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71833 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71895 00:36:01.986 Removing: /var/run/dpdk/spdk_pid71926 00:36:01.986 Removing: /var/run/dpdk/spdk_pid72336 00:36:01.986 Removing: /var/run/dpdk/spdk_pid72429 00:36:01.986 Removing: /var/run/dpdk/spdk_pid72527 00:36:01.986 Removing: /var/run/dpdk/spdk_pid72571 00:36:01.986 Removing: /var/run/dpdk/spdk_pid72591 00:36:01.986 Removing: /var/run/dpdk/spdk_pid72675 00:36:01.986 Removing: /var/run/dpdk/spdk_pid73281 00:36:01.986 Removing: /var/run/dpdk/spdk_pid73307 00:36:01.986 Removing: /var/run/dpdk/spdk_pid73758 00:36:01.986 Removing: /var/run/dpdk/spdk_pid73845 00:36:01.986 Removing: /var/run/dpdk/spdk_pid73948 00:36:01.986 Removing: /var/run/dpdk/spdk_pid73985 00:36:01.986 Removing: /var/run/dpdk/spdk_pid74016 00:36:01.986 Removing: /var/run/dpdk/spdk_pid74036 00:36:01.986 Removing: /var/run/dpdk/spdk_pid75849 00:36:01.986 Removing: /var/run/dpdk/spdk_pid75969 00:36:01.986 Removing: /var/run/dpdk/spdk_pid75973 00:36:01.986 Removing: /var/run/dpdk/spdk_pid75991 00:36:01.986 Removing: /var/run/dpdk/spdk_pid76031 00:36:01.986 Removing: /var/run/dpdk/spdk_pid76035 00:36:01.986 Removing: /var/run/dpdk/spdk_pid76047 00:36:01.986 Removing: /var/run/dpdk/spdk_pid76086 00:36:01.986 Removing: /var/run/dpdk/spdk_pid76090 00:36:01.986 Removing: /var/run/dpdk/spdk_pid76102 00:36:01.986 Removing: /var/run/dpdk/spdk_pid76141 00:36:01.986 Removing: /var/run/dpdk/spdk_pid76145 00:36:01.986 Removing: /var/run/dpdk/spdk_pid76157 00:36:01.986 Removing: /var/run/dpdk/spdk_pid77521 00:36:01.986 Removing: /var/run/dpdk/spdk_pid77607 00:36:01.986 Removing: /var/run/dpdk/spdk_pid79009 00:36:01.986 Removing: /var/run/dpdk/spdk_pid80368 00:36:01.986 Removing: /var/run/dpdk/spdk_pid80436 00:36:01.986 Removing: /var/run/dpdk/spdk_pid80496 00:36:01.986 Removing: /var/run/dpdk/spdk_pid80550 00:36:01.986 Removing: /var/run/dpdk/spdk_pid80629 00:36:01.986 Removing: 
/var/run/dpdk/spdk_pid80692 00:36:01.986 Removing: /var/run/dpdk/spdk_pid80831 00:36:01.986 Removing: /var/run/dpdk/spdk_pid81179 00:36:01.986 Removing: /var/run/dpdk/spdk_pid81199 00:36:01.986 Removing: /var/run/dpdk/spdk_pid81635 00:36:01.986 Removing: /var/run/dpdk/spdk_pid81808 00:36:01.986 Removing: /var/run/dpdk/spdk_pid81904 00:36:01.986 Removing: /var/run/dpdk/spdk_pid82003 00:36:01.986 Removing: /var/run/dpdk/spdk_pid82046 00:36:01.986 Removing: /var/run/dpdk/spdk_pid82066 00:36:01.986 Removing: /var/run/dpdk/spdk_pid82359 00:36:01.986 Removing: /var/run/dpdk/spdk_pid82402 00:36:01.986 Removing: /var/run/dpdk/spdk_pid82448 00:36:01.986 Removing: /var/run/dpdk/spdk_pid82818 00:36:01.986 Removing: /var/run/dpdk/spdk_pid82962 00:36:01.986 Removing: /var/run/dpdk/spdk_pid83755 00:36:01.986 Removing: /var/run/dpdk/spdk_pid83875 00:36:01.986 Removing: /var/run/dpdk/spdk_pid84030 00:36:01.986 Removing: /var/run/dpdk/spdk_pid84116 00:36:01.986 Removing: /var/run/dpdk/spdk_pid84424 00:36:01.986 Removing: /var/run/dpdk/spdk_pid84677 00:36:01.986 Removing: /var/run/dpdk/spdk_pid85023 00:36:01.986 Removing: /var/run/dpdk/spdk_pid85182 00:36:01.986 Removing: /var/run/dpdk/spdk_pid85379 00:36:01.986 Removing: /var/run/dpdk/spdk_pid85420 00:36:02.248 Removing: /var/run/dpdk/spdk_pid85652 00:36:02.248 Removing: /var/run/dpdk/spdk_pid85668 00:36:02.248 Removing: /var/run/dpdk/spdk_pid85716 00:36:02.248 Removing: /var/run/dpdk/spdk_pid86032 00:36:02.248 Removing: /var/run/dpdk/spdk_pid86250 00:36:02.248 Removing: /var/run/dpdk/spdk_pid87077 00:36:02.248 Removing: /var/run/dpdk/spdk_pid87959 00:36:02.248 Removing: /var/run/dpdk/spdk_pid88824 00:36:02.248 Removing: /var/run/dpdk/spdk_pid89807 00:36:02.248 Removing: /var/run/dpdk/spdk_pid89943 00:36:02.248 Removing: /var/run/dpdk/spdk_pid90015 00:36:02.248 Removing: /var/run/dpdk/spdk_pid90375 00:36:02.248 Removing: /var/run/dpdk/spdk_pid90430 00:36:02.248 Removing: /var/run/dpdk/spdk_pid91335 00:36:02.248 Removing: /var/run/dpdk/spdk_pid91948 00:36:02.248 Removing: /var/run/dpdk/spdk_pid92941 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93061 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93095 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93148 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93199 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93252 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93428 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93497 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93553 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93607 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93642 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93714 00:36:02.248 Removing: /var/run/dpdk/spdk_pid93879 00:36:02.248 Removing: /var/run/dpdk/spdk_pid94088 00:36:02.248 Removing: /var/run/dpdk/spdk_pid94940 00:36:02.248 Removing: /var/run/dpdk/spdk_pid95667 00:36:02.248 Removing: /var/run/dpdk/spdk_pid96509 00:36:02.248 Removing: /var/run/dpdk/spdk_pid97381 00:36:02.248 Clean 00:36:02.248 00:24:52 -- common/autotest_common.sh@1451 -- # return 0 00:36:02.248 00:24:52 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:36:02.248 00:24:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:02.248 00:24:52 -- common/autotest_common.sh@10 -- # set +x 00:36:02.248 00:24:52 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:36:02.248 00:24:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:02.248 00:24:52 -- common/autotest_common.sh@10 -- # set +x 00:36:02.248 00:24:52 -- spdk/autotest.sh@388 -- # chmod a+r 
/home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:02.248 00:24:52 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:02.248 00:24:52 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:02.248 00:24:52 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:36:02.248 00:24:52 -- spdk/autotest.sh@394 -- # hostname 00:36:02.248 00:24:52 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:02.513 geninfo: WARNING: invalid characters removed from testname! 00:36:29.096 00:25:17 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:31.125 00:25:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:33.041 00:25:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:35.590 00:25:25 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:38.138 00:25:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:40.683 00:25:30 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:42.602 00:25:32 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:42.602 00:25:32 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:36:42.602 00:25:32 -- common/autotest_common.sh@1681 -- $ lcov --version 00:36:42.602 
00:25:32 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:36:42.602 00:25:32 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:36:42.602 00:25:32 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:36:42.602 00:25:32 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:36:42.602 00:25:32 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:36:42.602 00:25:32 -- scripts/common.sh@336 -- $ IFS=.-: 00:36:42.602 00:25:32 -- scripts/common.sh@336 -- $ read -ra ver1 00:36:42.602 00:25:32 -- scripts/common.sh@337 -- $ IFS=.-: 00:36:42.602 00:25:32 -- scripts/common.sh@337 -- $ read -ra ver2 00:36:42.602 00:25:32 -- scripts/common.sh@338 -- $ local 'op=<' 00:36:42.602 00:25:32 -- scripts/common.sh@340 -- $ ver1_l=2 00:36:42.602 00:25:32 -- scripts/common.sh@341 -- $ ver2_l=1 00:36:42.602 00:25:32 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:36:42.602 00:25:32 -- scripts/common.sh@344 -- $ case "$op" in 00:36:42.602 00:25:32 -- scripts/common.sh@345 -- $ : 1 00:36:42.602 00:25:32 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:36:42.602 00:25:32 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:42.602 00:25:32 -- scripts/common.sh@365 -- $ decimal 1 00:36:42.602 00:25:32 -- scripts/common.sh@353 -- $ local d=1 00:36:42.602 00:25:32 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:36:42.602 00:25:32 -- scripts/common.sh@355 -- $ echo 1 00:36:42.602 00:25:32 -- scripts/common.sh@365 -- $ ver1[v]=1 00:36:42.602 00:25:32 -- scripts/common.sh@366 -- $ decimal 2 00:36:42.602 00:25:32 -- scripts/common.sh@353 -- $ local d=2 00:36:42.602 00:25:32 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:36:42.602 00:25:32 -- scripts/common.sh@355 -- $ echo 2 00:36:42.602 00:25:32 -- scripts/common.sh@366 -- $ ver2[v]=2 00:36:42.602 00:25:32 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:36:42.602 00:25:32 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:36:42.602 00:25:32 -- scripts/common.sh@368 -- $ return 0 00:36:42.602 00:25:32 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:42.602 00:25:32 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:36:42.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.602 --rc genhtml_branch_coverage=1 00:36:42.602 --rc genhtml_function_coverage=1 00:36:42.602 --rc genhtml_legend=1 00:36:42.602 --rc geninfo_all_blocks=1 00:36:42.602 --rc geninfo_unexecuted_blocks=1 00:36:42.602 00:36:42.602 ' 00:36:42.602 00:25:32 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:36:42.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.602 --rc genhtml_branch_coverage=1 00:36:42.602 --rc genhtml_function_coverage=1 00:36:42.602 --rc genhtml_legend=1 00:36:42.602 --rc geninfo_all_blocks=1 00:36:42.602 --rc geninfo_unexecuted_blocks=1 00:36:42.602 00:36:42.602 ' 00:36:42.602 00:25:32 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:36:42.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.602 --rc genhtml_branch_coverage=1 00:36:42.602 --rc genhtml_function_coverage=1 00:36:42.602 --rc genhtml_legend=1 00:36:42.602 --rc geninfo_all_blocks=1 00:36:42.602 --rc geninfo_unexecuted_blocks=1 00:36:42.602 00:36:42.602 ' 00:36:42.602 00:25:32 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:36:42.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.602 --rc genhtml_branch_coverage=1 00:36:42.602 --rc genhtml_function_coverage=1 00:36:42.602 --rc 
genhtml_legend=1 00:36:42.602 --rc geninfo_all_blocks=1 00:36:42.602 --rc geninfo_unexecuted_blocks=1 00:36:42.602 00:36:42.602 ' 00:36:42.602 00:25:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:42.602 00:25:32 -- scripts/common.sh@15 -- $ shopt -s extglob 00:36:42.602 00:25:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:42.602 00:25:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:42.602 00:25:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:42.602 00:25:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.602 00:25:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.602 00:25:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.602 00:25:32 -- paths/export.sh@5 -- $ export PATH 00:36:42.602 00:25:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.602 00:25:32 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:36:42.602 00:25:32 -- common/autobuild_common.sh@479 -- $ date +%s 00:36:42.602 00:25:32 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732148732.XXXXXX 00:36:42.602 00:25:32 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732148732.ocG5bp 00:36:42.602 00:25:32 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:36:42.602 00:25:32 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:36:42.602 00:25:32 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:36:42.602 00:25:32 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:36:42.602 00:25:32 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:36:42.602 00:25:32 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:36:42.603 00:25:32 -- common/autobuild_common.sh@495 -- $ get_config_params 
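The xtrace above is scripts/common.sh gating LCOV_OPTS on the installed lcov version: it extracts the version with lcov --version | awk '{print $NF}', then runs cmp_versions 1.15 '<' 2. A simplified, numeric-only sketch of that dotted-version comparison, assuming a hypothetical version_lt wrapper rather than the full cmp_versions implementation (which also tolerates non-numeric components):

# Hedged sketch of the dotted-version comparison traced above; a simplified
# reimplementation for numeric components only, not the verbatim helper.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0   # strictly lower component: older version
        (( x > y )) && return 1
    done
    return 1                       # equal versions are not strictly less-than
}

lcov_ver=$(lcov --version | awk '{print $NF}')
if version_lt "$lcov_ver" 2; then
    # Matches the log: pre-2.x lcov gets explicit branch/function coverage flags
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi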
00:36:42.603 00:25:32 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:36:42.603 00:25:32 -- common/autotest_common.sh@10 -- $ set +x 00:36:42.603 00:25:32 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:36:42.603 00:25:32 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:36:42.603 00:25:32 -- pm/common@17 -- $ local monitor 00:36:42.603 00:25:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:42.603 00:25:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:42.603 00:25:32 -- pm/common@25 -- $ sleep 1 00:36:42.603 00:25:32 -- pm/common@21 -- $ date +%s 00:36:42.603 00:25:32 -- pm/common@21 -- $ date +%s 00:36:42.603 00:25:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732148732 00:36:42.603 00:25:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732148732 00:36:42.603 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732148732_collect-vmstat.pm.log 00:36:42.603 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732148732_collect-cpu-load.pm.log 00:36:43.546 00:25:33 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:36:43.546 00:25:33 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:36:43.546 00:25:33 -- spdk/autopackage.sh@14 -- $ timing_finish 00:36:43.546 00:25:33 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:43.546 00:25:33 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:43.546 00:25:33 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:43.806 00:25:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:43.806 00:25:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:43.806 00:25:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:43.806 00:25:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:43.806 00:25:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:36:43.806 00:25:33 -- pm/common@44 -- $ pid=99056 00:36:43.806 00:25:33 -- pm/common@50 -- $ kill -TERM 99056 00:36:43.806 00:25:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:43.806 00:25:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:36:43.806 00:25:33 -- pm/common@44 -- $ pid=99057 00:36:43.806 00:25:33 -- pm/common@50 -- $ kill -TERM 99057 00:36:43.806 + [[ -n 5763 ]] 00:36:43.806 + sudo kill 5763 00:36:43.818 [Pipeline] } 00:36:43.832 [Pipeline] // timeout 00:36:43.837 [Pipeline] } 00:36:43.850 [Pipeline] // stage 00:36:43.855 [Pipeline] } 00:36:43.870 [Pipeline] // catchError 00:36:43.878 [Pipeline] stage 00:36:43.879 [Pipeline] { (Stop VM) 00:36:43.890 [Pipeline] sh 00:36:44.178 + vagrant halt 00:36:47.486 ==> default: Halting domain... 
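For reference, the coverage post-processing the epilogue ran just before this VM teardown (the geninfo capture followed by the traced lcov merge and filter passes) boils down to the sequence below. The paths match this workspace and the rc option list is abbreviated, so treat it as a sketch rather than the exact autotest.sh commands:

# Sketch of the traced coverage aggregation; rc options abbreviated.
OUT=/home/vagrant/spdk_repo/output
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"

# Merge the baseline and test captures into one tracefile ...
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# ... then strip everything that is not SPDK's own code, one filter per pass,
# mirroring the separate lcov -r invocations in the log above.
$LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"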
00:36:52.812 [Pipeline] sh 00:36:53.099 + vagrant destroy -f 00:36:55.658 ==> default: Removing domain... 00:36:56.244 [Pipeline] sh 00:36:56.530 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:36:56.541 [Pipeline] } 00:36:56.560 [Pipeline] // stage 00:36:56.566 [Pipeline] } 00:36:56.582 [Pipeline] // dir 00:36:56.589 [Pipeline] } 00:36:56.605 [Pipeline] // wrap 00:36:56.613 [Pipeline] } 00:36:56.627 [Pipeline] // catchError 00:36:56.638 [Pipeline] stage 00:36:56.640 [Pipeline] { (Epilogue) 00:36:56.655 [Pipeline] sh 00:36:56.942 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:02.256 [Pipeline] catchError 00:37:02.258 [Pipeline] { 00:37:02.272 [Pipeline] sh 00:37:02.616 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:02.877 Artifacts sizes are good 00:37:02.888 [Pipeline] } 00:37:02.903 [Pipeline] // catchError 00:37:02.915 [Pipeline] archiveArtifacts 00:37:02.922 Archiving artifacts 00:37:03.029 [Pipeline] cleanWs 00:37:03.042 [WS-CLEANUP] Deleting project workspace... 00:37:03.042 [WS-CLEANUP] Deferred wipeout is used... 00:37:03.050 [WS-CLEANUP] done 00:37:03.051 [Pipeline] } 00:37:03.067 [Pipeline] // stage 00:37:03.072 [Pipeline] } 00:37:03.085 [Pipeline] // node 00:37:03.091 [Pipeline] End of Pipeline 00:37:03.132 Finished: SUCCESS